Jan 23 07:55:24 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 07:55:24 crc restorecon[4759]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:24 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 07:55:25 crc 
restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 07:55:25 crc restorecon[4759]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 07:55:25 crc kubenswrapper[4982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 07:55:25 crc kubenswrapper[4982]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 07:55:25 crc kubenswrapper[4982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 07:55:25 crc kubenswrapper[4982]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 07:55:25 crc kubenswrapper[4982]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 07:55:25 crc kubenswrapper[4982]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.646795 4982 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654532 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654571 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654581 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654591 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654599 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654608 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654617 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654651 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654669 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654686 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654698 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654712 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654723 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654732 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654741 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654752 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
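The six "Flag ... has been deprecated" messages above all point at the same remedy: move the setting into the file passed to the kubelet's --config flag. Below is a minimal sketch of the corresponding KubeletConfiguration stanzas, assuming the kubelet.config.k8s.io/v1beta1 API; every value is an illustrative placeholder, not read from this node.

```yaml
# Sketch only: maps the deprecated flags from the warnings above onto
# kubelet config-file fields. Values are hypothetical placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# replaces --system-reserved
systemReserved:
  cpu: 500m
  memory: 1Gi
# --minimum-container-ttl-duration has no config-file equivalent; per the
# warning, eviction thresholds take over its role:
evictionHard:
  memory.available: 100Mi
```

Note that --pod-infra-container-image likewise has no config-file field; as its warning says, the image garbage collector obtains the sandbox image from the CRI runtime instead.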
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654763 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654772 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654780 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654788 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654795 4982 feature_gate.go:330] unrecognized feature gate: Example Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654803 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654811 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654830 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654838 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654847 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654857 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654865 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654873 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654881 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654888 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654896 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654908 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
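The "unrecognized feature gate" warnings in this run continue below and are re-emitted on every feature-gate parsing pass (the same gate names recur later with timestamps .657xxx and .668xxx), so a deduplicated count is much easier to read than the raw stream. A sketch under the same kubelet.log assumption as above; the interpretation in the comment is mine, not stated by the log:

import re
from collections import Counter

# Assumption: journal saved to kubelet.log as above.
GATE = re.compile(r"unrecognized feature gate: (\w+)")

with open("kubelet.log", encoding="utf-8") as fh:
    counts = Counter(GATE.findall(fh.read()))

# Each gate typically appears once per parsing pass; the names look like
# OpenShift-side gates that the embedded Kubernetes gate registry does
# not know about (an assumption, consistent with the W-level severity).
for name, n in counts.most_common():
    print(f"{n:3d}  {name}")
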
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654922 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654932 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654940 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654949 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654957 4982 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654966 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654974 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654982 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654990 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.654999 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655007 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655014 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655022 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655030 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655038 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655046 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655053 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655061 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655068 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655076 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655084 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655094 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655107 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655128 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655141 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655151 4982 
feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655161 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655171 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655181 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655235 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655254 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655262 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655271 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655287 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655304 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655314 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655322 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.655330 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655490 4982 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655508 4982 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655522 4982 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655534 4982 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655545 4982 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655555 4982 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655567 4982 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655578 4982 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655587 4982 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655596 4982 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655606 4982 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655618 4982 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655656 4982 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655666 4982 flags.go:64] FLAG: --cgroup-root="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655675 4982 flags.go:64] FLAG: --cgroups-per-qos="true" 
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655684 4982 flags.go:64] FLAG: --client-ca-file="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655693 4982 flags.go:64] FLAG: --cloud-config="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655702 4982 flags.go:64] FLAG: --cloud-provider="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655710 4982 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655722 4982 flags.go:64] FLAG: --cluster-domain="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655731 4982 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655740 4982 flags.go:64] FLAG: --config-dir="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655749 4982 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655759 4982 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655770 4982 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655779 4982 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655788 4982 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655798 4982 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655807 4982 flags.go:64] FLAG: --contention-profiling="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655816 4982 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655827 4982 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655837 4982 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655846 4982 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655857 4982 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655866 4982 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655875 4982 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655884 4982 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655893 4982 flags.go:64] FLAG: --enable-server="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655902 4982 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655913 4982 flags.go:64] FLAG: --event-burst="100" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655922 4982 flags.go:64] FLAG: --event-qps="50" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655931 4982 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655939 4982 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655948 4982 flags.go:64] FLAG: --eviction-hard="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655959 4982 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 07:55:25 crc 
kubenswrapper[4982]: I0123 07:55:25.655968 4982 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655977 4982 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655987 4982 flags.go:64] FLAG: --eviction-soft="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.655996 4982 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656005 4982 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656014 4982 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656023 4982 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656032 4982 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656040 4982 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656049 4982 flags.go:64] FLAG: --feature-gates="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656060 4982 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656099 4982 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656111 4982 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656120 4982 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656130 4982 flags.go:64] FLAG: --healthz-port="10248" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656139 4982 flags.go:64] FLAG: --help="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656148 4982 flags.go:64] FLAG: --hostname-override="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656171 4982 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656189 4982 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656199 4982 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656208 4982 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656216 4982 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656225 4982 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656234 4982 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656243 4982 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656252 4982 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656261 4982 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656271 4982 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656279 4982 flags.go:64] FLAG: --kube-reserved="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656288 4982 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 
07:55:25.656298 4982 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656307 4982 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656316 4982 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656325 4982 flags.go:64] FLAG: --lock-file="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656333 4982 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656343 4982 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656352 4982 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656375 4982 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656385 4982 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656394 4982 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656403 4982 flags.go:64] FLAG: --logging-format="text" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656412 4982 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656422 4982 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656431 4982 flags.go:64] FLAG: --manifest-url="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656439 4982 flags.go:64] FLAG: --manifest-url-header="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656451 4982 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656461 4982 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656472 4982 flags.go:64] FLAG: --max-pods="110" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656481 4982 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656492 4982 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656501 4982 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656511 4982 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656520 4982 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656528 4982 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656538 4982 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656558 4982 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656567 4982 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656576 4982 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656586 4982 flags.go:64] FLAG: --pod-cidr="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656594 4982 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656607 4982 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656616 4982 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656651 4982 flags.go:64] FLAG: --pods-per-core="0" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656660 4982 flags.go:64] FLAG: --port="10250" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656670 4982 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656679 4982 flags.go:64] FLAG: --provider-id="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656688 4982 flags.go:64] FLAG: --qos-reserved="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656697 4982 flags.go:64] FLAG: --read-only-port="10255" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656706 4982 flags.go:64] FLAG: --register-node="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656714 4982 flags.go:64] FLAG: --register-schedulable="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656723 4982 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656739 4982 flags.go:64] FLAG: --registry-burst="10" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656750 4982 flags.go:64] FLAG: --registry-qps="5" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656760 4982 flags.go:64] FLAG: --reserved-cpus="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656772 4982 flags.go:64] FLAG: --reserved-memory="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656783 4982 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656792 4982 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656801 4982 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656810 4982 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656819 4982 flags.go:64] FLAG: --runonce="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656827 4982 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656837 4982 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656847 4982 flags.go:64] FLAG: --seccomp-default="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656856 4982 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656864 4982 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656873 4982 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656884 4982 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656895 4982 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656906 4982 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 
07:55:25.656916 4982 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656925 4982 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656933 4982 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656943 4982 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656952 4982 flags.go:64] FLAG: --system-cgroups="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656961 4982 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656975 4982 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656983 4982 flags.go:64] FLAG: --tls-cert-file="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.656993 4982 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657005 4982 flags.go:64] FLAG: --tls-min-version="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657015 4982 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657025 4982 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657034 4982 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657043 4982 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657053 4982 flags.go:64] FLAG: --v="2" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657070 4982 flags.go:64] FLAG: --version="false" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657081 4982 flags.go:64] FLAG: --vmodule="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657091 4982 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657101 4982 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657307 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657316 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657325 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657337 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657347 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657357 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657365 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657373 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657384 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
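The flags.go:64 records above dump every command-line flag with its effective value, one FLAG: --name="value" record each, which makes the block easy to turn into a dictionary for auditing (for example, to confirm --config, --node-ip, or --system-reserved without scanning the dump by eye). A sketch, again assuming the journal is saved to kubelet.log; the three lookups at the end just echo values visible in the dump:

import re

# Assumption: journal saved to kubelet.log as above.
# Matches records like: flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="([^"]*)"')

with open("kubelet.log", encoding="utf-8") as fh:
    flags = dict(FLAG.findall(fh.read()))

print(flags.get("--config"))           # /etc/kubernetes/kubelet.conf
print(flags.get("--node-ip"))          # 192.168.126.11
print(flags.get("--system-reserved"))  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
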
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657394 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657405 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657415 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657425 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657433 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657442 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657451 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657459 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657470 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657479 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657487 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657496 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657504 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657512 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657521 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657531 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657540 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657550 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657558 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657567 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657575 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657584 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657592 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657599 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657607 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657616 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657650 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657660 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657692 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657701 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657709 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657717 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657725 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657733 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657740 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657748 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657756 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657763 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657771 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657778 4982 feature_gate.go:330] unrecognized feature gate: Example Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657786 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657794 4982 feature_gate.go:330] unrecognized feature gate: 
PlatformOperators Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657801 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657809 4982 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657817 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657824 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657832 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657842 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657851 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657860 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657868 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657876 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657884 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657892 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657900 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657907 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657916 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657924 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657932 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657939 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657947 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.657955 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.657968 4982 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.667915 4982 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.667946 4982 server.go:493] "Golang 
settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668009 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668016 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668021 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668025 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668030 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668033 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668037 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668041 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668044 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668048 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668051 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668056 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668060 4982 feature_gate.go:330] unrecognized feature gate: Example Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668064 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668068 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668073 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668109 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668113 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668116 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668120 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668123 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668127 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668131 4982 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668134 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668138 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668142 
4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668145 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668150 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668155 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668159 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668163 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668166 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668170 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668174 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668178 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668181 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668185 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668188 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668192 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668195 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668199 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668202 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668207 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668211 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668215 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668220 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668224 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668228 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668233 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668237 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668241 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668245 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668249 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668253 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668256 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668260 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668264 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668268 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668273 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668276 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668280 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668283 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668287 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668290 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668294 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668297 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668301 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668304 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668308 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668311 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668316 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.668322 4982 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668434 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668439 4982 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668443 4982 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668446 4982 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668450 4982 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668469 4982 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668473 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668484 4982 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668491 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668500 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668510 4982 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668514 4982 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668518 4982 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668522 4982 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
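The feature_gate.go:386 summary record just above is the authoritative result of all the warning noise around it: it prints the effective gate settings in Go's map[...] syntax. A sketch that parses that one record into a Python dict (same kubelet.log assumption); it relies only on the space-separated Name:bool pairs visible in the record:

import re

# Assumption: journal saved to kubelet.log as above.
# The summary looks like:
#   feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}
SUMMARY = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")

with open("kubelet.log", encoding="utf-8") as fh:
    m = SUMMARY.search(fh.read())

gates = {}
if m:
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = (value == "true")

print(gates.get("ValidatingAdmissionPolicy"))  # True
print(gates.get("NodeSwap"))                   # False
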
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668526 4982 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668530 4982 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668533 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668537 4982 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668541 4982 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668544 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668547 4982 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668551 4982 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668554 4982 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668558 4982 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668562 4982 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668565 4982 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668570 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668574 4982 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668577 4982 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668581 4982 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668584 4982 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668588 4982 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668591 4982 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668596 4982 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668601 4982 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668606 4982 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668609 4982 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668613 4982 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668617 4982 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668646 4982 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668655 4982 feature_gate.go:330] unrecognized feature gate: Example Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668658 4982 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668666 4982 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668670 4982 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668673 4982 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668681 4982 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668684 4982 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668688 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668691 4982 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668695 4982 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668699 4982 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668704 4982 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668708 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668711 4982 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668715 4982 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668719 4982 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668722 4982 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668726 4982 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668730 4982 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668734 4982 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668737 4982 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668741 4982 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668744 4982 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668748 4982 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668751 4982 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668755 4982 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668758 4982 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668761 4982 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668765 4982 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668769 4982 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.668774 4982 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.668832 4982 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.668952 4982 server.go:940] "Client rotation is on, will bootstrap in background" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.671293 4982 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap 
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.671293 4982 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.671363 4982 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.671844 4982 server.go:997] "Starting client certificate rotation"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.671869 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.672066 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-14 19:04:40.025705158 +0000 UTC
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.672246 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.680817 4982 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.683101 4982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.683276 4982 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.693075 4982 log.go:25] "Validated CRI v1 runtime API"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.711876 4982 log.go:25] "Validated CRI v1 image API"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.713863 4982 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.716776 4982 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-07-34-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.716849 4982 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
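
Just above, certificate_manager.go reports a client certificate valid until 2026-02-24 with a rotation deadline of 2025-12-14, which is already in the past at this boot (2026-01-23), so it rotates immediately; the CSR POST then fails because nothing answers on api-int.crc.testing:6443 yet. A hedged sketch of how such a deadline can fall out of the certificate's validity window (the 70-90% jitter band is an assumption about the policy, not something this log confirms):

```go
// Sketch: pick a rotation deadline somewhere past ~70% of the cert's
// lifetime. The exact jitter policy is an assumption for illustration.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed 70%..90% band
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	// Expiration taken from the log; issuance assumed one year earlier.
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```
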
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.748494 4982 manager.go:217] Machine: {Timestamp:2026-01-23 07:55:25.745002402 +0000 UTC m=+0.225365828 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:9454035b-7737-4bff-9cbb-fb77ac30a74d BootID:da89adcc-d5b1-4734-a8fa-b65ffc69cee5 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2f:22:de Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:2f:22:de Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0a:37:2a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:78:e4:36 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1a:ad:6b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:dc:49:53 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:8c:87:1e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:51:e6:72:e2:d4 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:9f:b5:f5:a9:db Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.749033 4982 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.749341 4982 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.750363 4982 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.750714 4982 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
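
The Machine record above is cAdvisor's hardware inventory of the node. As a quick check of the reported numbers (a worked example, not kubelet code): MemoryCapacity:33654132736 is about 31.3 GiB, the vda disk is exactly 200 GiB, and NumCores:12 with NumSockets:12 is the one-core-per-socket topology a QEMU guest typically reports:

```go
// Worked arithmetic over values copied from the Machine record above.
package main

import "fmt"

func main() {
	const memBytes = 33654132736   // MemoryCapacity
	const diskBytes = 214748364800 // DiskMap vda Size
	fmt.Printf("MemoryCapacity: %.1f GiB\n", float64(memBytes)/(1<<30))  // ~31.3 GiB
	fmt.Printf("vda:            %.0f GiB\n", float64(diskBytes)/(1<<30)) // 200 GiB
}
```
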
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.750778 4982 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.751149 4982 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.751168 4982 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.751504 4982 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.751553 4982 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.751817 4982 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.751948 4982 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.753037 4982 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.753071 4982 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.753148 4982 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.753169 4982 kubelet.go:324] "Adding apiserver pod source"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.753193 4982 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.755307 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused
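
The nodeConfig dump above pins down the node's resource envelope: SystemReserved holds back 200m CPU plus 350Mi memory and 350Mi ephemeral storage, KubeReserved is null, and the hard eviction threshold for memory.available is 100Mi. Allocatable memory then follows as capacity minus reservations minus the eviction threshold; a worked version of that arithmetic (capacity taken from the Machine record above):

```go
// Allocatable-memory arithmetic implied by nodeConfig; illustrative only.
package main

import "fmt"

func main() {
	const (
		capacity      = 33654132736 // bytes, MemoryCapacity from the Machine record
		systemReserve = 350 << 20   // SystemReserved memory: 350Mi
		evictionHard  = 100 << 20   // memory.available hard threshold: 100Mi
	)
	// KubeReserved is null in this nodeConfig, so it contributes nothing.
	fmt.Printf("allocatable ≈ %.2f GiB\n",
		float64(capacity-systemReserve-evictionHard)/(1<<30))
}
```
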
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.755516 4982 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.755643 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError"
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.755802 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.755933 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.755978 4982 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.756723 4982 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757162 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757182 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757189 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757196 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757206 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757212 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757218 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757228 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757235 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757241 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757278 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757285 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757467 4982 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
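
Each plugins.go:603 entry above registers one in-tree volume plugin under its kubernetes.io/... name; anything outside this fixed list is expected to arrive through the kubernetes.io/csi plugin. A minimal sketch of a name-keyed registry of that shape (the interface and types here are assumptions for illustration, not the kubelet's actual API):

```go
// Assumed shape of a "Loaded volume plugin" registry: plugins keyed by name.
package main

import "fmt"

type volumePlugin interface{ Name() string }

type emptyDir struct{}

func (emptyDir) Name() string { return "kubernetes.io/empty-dir" }

type hostPath struct{}

func (hostPath) Name() string { return "kubernetes.io/host-path" }

func main() {
	registry := map[string]volumePlugin{}
	for _, p := range []volumePlugin{emptyDir{}, hostPath{}} {
		registry[p.Name()] = p
		fmt.Printf("I plugins: Loaded volume plugin pluginName=%q\n", p.Name())
	}
	fmt.Println("registered:", len(registry))
}
```
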
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.757792 4982 server.go:1280] "Started kubelet"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.758270 4982 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.758987 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused
Jan 23 07:55:25 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.759521 4982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.763338 4982 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.764989 4982 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.765822 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.765864 4982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.765966 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:05:07.187772589 +0000 UTC
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.766312 4982 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.766681 4982 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.766745 4982 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.766928 4982 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.767257 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="200ms"
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.764312 4982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.168:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d4d08166ca7f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 07:55:25.757769721 +0000 UTC m=+0.238133107,LastTimestamp:2026-01-23 07:55:25.757769721 +0000 UTC m=+0.238133107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.770611 4982 factory.go:55] Registering systemd factory
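
Every control-plane call in this window (the node lease, node status, event writes, informer lists, CSINode publishing) fails with connect: connection refused against 38.129.56.168:6443: on a single-node cluster the kubelet comes up before the static-pod kube-apiserver it is about to launch, so these errors are expected and retried. A sketch of the retry loop implied by "will retry ... interval=200ms" (the doubling backoff is an assumption; the log only shows the first interval):

```go
// Retry-on-refused sketch for the lease controller pattern in the log.
package main

import (
	"errors"
	"fmt"
	"syscall"
	"time"
)

// ensureLease stands in for the request against api-int.crc.testing:6443;
// here it always refuses, like the log at this point in the boot.
func ensureLease() error { return syscall.ECONNREFUSED }

func main() {
	interval := 200 * time.Millisecond // matches interval="200ms" in the log
	for attempt := 0; attempt < 3; attempt++ {
		err := ensureLease()
		if err == nil {
			return
		}
		if errors.Is(err, syscall.ECONNREFUSED) {
			fmt.Printf("E lease: %v, will retry interval=%v\n", err, interval)
			time.Sleep(interval)
			interval *= 2 // assumed doubling; the exact backoff policy isn't shown here
		}
	}
}
```
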
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.770667 4982 factory.go:221] Registration of the systemd container factory successfully
Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.771351 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused
Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.771451 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError"
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.771810 4982 factory.go:153] Registering CRI-O factory
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.771837 4982 factory.go:221] Registration of the crio container factory successfully
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.771909 4982 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.771931 4982 factory.go:103] Registering Raw factory
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.771975 4982 manager.go:1196] Started watching for new ooms in manager
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.772736 4982 manager.go:319] Starting recovery of all containers
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791651 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791734 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791759 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791780 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791800 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
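
The long run of reconstruct.go:130 entries starting here is the volume manager rebuilding its "actual state of the world" after the restart: every mount rediscovered under /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volume> is marked uncertain until the API server can be reached to confirm it. A sketch of that directory walk (the path layout matches the log; the walk itself is illustrative, not kubelet source):

```go
// Rediscover volume mounts from the kubelet's on-disk pod layout.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods"
	pods, _ := os.ReadDir(root) // errors ignored for brevity
	for _, pod := range pods {
		volRoot := filepath.Join(root, pod.Name(), "volumes")
		plugins, _ := os.ReadDir(volRoot)
		for _, plugin := range plugins {
			vols, _ := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			for _, v := range vols {
				fmt.Printf("uncertain volume: podName=%q volumeName=%q\n",
					pod.Name(), plugin.Name()+"/"+v.Name())
			}
		}
	}
}
```
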
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791821 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791850 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791868 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791915 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791935 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791954 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.791982 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792006 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792028 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792047 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792081 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792225 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792254 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792321 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792341 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792360 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792378 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792397 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792415 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792493 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792515 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792551 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792573 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792721 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792849 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792878 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792901 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792956 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792978 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.792996 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793018 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793111 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793139 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793167 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793186 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793244 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793345 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793365 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793386 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793406 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793468 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793487 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793507 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793606 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793712 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793735 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793754 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793785 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793809 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793832 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793855 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793948 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.793976 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794002 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794021 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794082 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794102 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794123 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794142 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794165 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794186 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794204 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794222 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794242 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794262 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794283 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794302 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794322 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794342 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794362 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794383 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794404 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794460 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794480 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794502 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794558 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794578 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794596 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794615 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794664 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794685 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794703 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794724 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794743 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794762 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794782 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794804 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794824 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794842 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794862 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794880 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794900 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794920 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794940 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794959 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794979 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.794998 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795017 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795040 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795067 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795088 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795112 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795133 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795157 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795180 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795199 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795220 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795243 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795264 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795293 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795313 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795333 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795352 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795373 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795393 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795411 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795428 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795449 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795467 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795485 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795503 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795522 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795541 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795562 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795581 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795600 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795618 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795665 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795684 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795703 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795731 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795751 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795771 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb"
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.795791 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796226 4982 manager.go:324] Recovery completed Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796776 4982 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796835 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796859 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796882 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796906 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796935 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796962 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.796988 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797016 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 07:55:25 crc 
kubenswrapper[4982]: I0123 07:55:25.797042 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797068 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797095 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797121 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797148 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797173 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797198 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797225 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797252 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797271 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797292 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 
07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797311 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797342 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797360 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797381 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797401 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797420 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797440 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797460 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797511 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797538 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797565 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797585 4982 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797608 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797662 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797682 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797702 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797723 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797744 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797762 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797782 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797802 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797824 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797844 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797864 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797886 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797907 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.797926 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798359 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798396 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798422 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798457 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798478 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798517 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798548 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798574 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798613 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798714 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798752 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798774 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.798794 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.799089 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800412 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800512 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800539 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800566 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800599 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800621 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800669 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800698 4982 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800717 4982 reconstruct.go:97] "Volume reconstruction finished" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.800732 4982 reconciler.go:26] "Reconciler: start to sync state" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.811721 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.816081 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.816134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.816146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.818360 4982 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.818403 4982 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.818459 4982 state_mem.go:36] "Initialized new in-memory state store" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.823229 4982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.825977 4982 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.826007 4982 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.826027 4982 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.826062 4982 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 07:55:25 crc kubenswrapper[4982]: W0123 07:55:25.827061 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.827121 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.828660 4982 policy_none.go:49] "None policy: Start" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.829467 4982 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.829501 4982 state_mem.go:35] "Initializing new in-memory state store" Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.866663 4982 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897079 4982 manager.go:334] "Starting Device Plugin manager" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897375 4982 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897393 4982 server.go:79] "Starting device plugin registration server" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897747 4982 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897760 4982 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897914 4982 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897983 4982 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.897990 4982 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.904719 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.926868 4982 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 07:55:25 crc kubenswrapper[4982]: 
I0123 07:55:25.927000 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928243 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928418 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928447 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.928993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929040 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929111 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929219 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929345 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.929388 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930014 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930043 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930057 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930067 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930192 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930464 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930532 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.930979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931012 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931024 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931118 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931257 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931285 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931585 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931643 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.931653 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932153 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932178 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932192 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932415 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.932448 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.933518 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.933550 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.933563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.967892 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="400ms" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.997855 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.998986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.999022 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.999033 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:25 crc kubenswrapper[4982]: I0123 07:55:25.999058 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 07:55:25 crc kubenswrapper[4982]: E0123 07:55:25.999587 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.168:6443: connect: connection refused" node="crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002764 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002846 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002898 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002952 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002975 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.002993 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003042 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003092 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003121 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003144 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003163 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003206 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.003232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104449 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104495 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104514 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104530 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104564 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104580 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104600 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104615 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104634 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104685 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104644 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104701 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104726 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104714 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104735 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104686 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104770 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104869 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104883 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104914 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104922 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104985 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104999 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.104949 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.105050 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.105129 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.105040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.200605 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.202037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.202123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.202149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.202197 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 07:55:26 crc kubenswrapper[4982]: E0123 07:55:26.203043 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.168:6443: connect: connection refused" node="crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.251965 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.268840 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.277297 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.283867 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ef4c93ebc0263f2c785638dce8edc473c9b23821b06681a95813e6bade30cf47 WatchSource:0}: Error finding container ef4c93ebc0263f2c785638dce8edc473c9b23821b06681a95813e6bade30cf47: Status 404 returned error can't find the container with id ef4c93ebc0263f2c785638dce8edc473c9b23821b06681a95813e6bade30cf47 Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.290832 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.296071 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-74cb5af94d13f47a33df43e1d4362dbc30e1e84afb2a48e6dee2ea31307c47bc WatchSource:0}: Error finding container 74cb5af94d13f47a33df43e1d4362dbc30e1e84afb2a48e6dee2ea31307c47bc: Status 404 returned error can't find the container with id 74cb5af94d13f47a33df43e1d4362dbc30e1e84afb2a48e6dee2ea31307c47bc Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.296122 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.316278 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bb2d3c3bd88a69fa978677070a0cc7e06a9af227d520ebbf3fe8bdc27cb24b5b WatchSource:0}: Error finding container bb2d3c3bd88a69fa978677070a0cc7e06a9af227d520ebbf3fe8bdc27cb24b5b: Status 404 returned error can't find the container with id bb2d3c3bd88a69fa978677070a0cc7e06a9af227d520ebbf3fe8bdc27cb24b5b Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.317066 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a7fc9627b259ae133dbb1fffea886caa5e79b975f64b25f50e5def216fc60cf4 WatchSource:0}: Error finding container a7fc9627b259ae133dbb1fffea886caa5e79b975f64b25f50e5def216fc60cf4: Status 404 returned error can't find the container with id a7fc9627b259ae133dbb1fffea886caa5e79b975f64b25f50e5def216fc60cf4 Jan 23 07:55:26 crc kubenswrapper[4982]: E0123 07:55:26.368937 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="800ms" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.603231 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.605949 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.606025 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.606045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.606089 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 07:55:26 crc kubenswrapper[4982]: E0123 07:55:26.606767 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.168:6443: connect: connection refused" node="crc" Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.705206 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused Jan 23 07:55:26 crc kubenswrapper[4982]: E0123 07:55:26.705300 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.761498 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.766568 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:17:47.273843576 +0000 UTC Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.831289 4982 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4" exitCode=0 Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.831363 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.831491 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ef4c93ebc0263f2c785638dce8edc473c9b23821b06681a95813e6bade30cf47"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.831595 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.832495 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.832528 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.832536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.832986 4982 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494" exitCode=0 Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.833043 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.833069 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a7fc9627b259ae133dbb1fffea886caa5e79b975f64b25f50e5def216fc60cf4"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.833149 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.834083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.834107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.834115 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.836087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.836115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb2d3c3bd88a69fa978677070a0cc7e06a9af227d520ebbf3fe8bdc27cb24b5b"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.837913 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109" exitCode=0 Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.837955 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.837970 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a56307af04026e232a06f2b6389e638f0c1ebcefeb41a07dc60f14af61f66152"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.838044 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839135 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839160 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839652 4982 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a" exitCode=0 Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839686 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839707 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74cb5af94d13f47a33df43e1d4362dbc30e1e84afb2a48e6dee2ea31307c47bc"} Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.839820 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.840496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.840519 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.840527 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.844486 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.845353 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.845415 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:26 crc kubenswrapper[4982]: I0123 07:55:26.845435 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.845436 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused Jan 23 07:55:26 crc kubenswrapper[4982]: E0123 07:55:26.845495 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError" Jan 23 07:55:26 crc kubenswrapper[4982]: W0123 07:55:26.856494 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused Jan 23 07:55:26 crc kubenswrapper[4982]: E0123 07:55:26.856580 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError" Jan 23 07:55:27 crc kubenswrapper[4982]: E0123 07:55:27.170105 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection 
refused" interval="1.6s" Jan 23 07:55:27 crc kubenswrapper[4982]: W0123 07:55:27.177203 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.168:6443: connect: connection refused Jan 23 07:55:27 crc kubenswrapper[4982]: E0123 07:55:27.177278 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.168:6443: connect: connection refused" logger="UnhandledError" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.407698 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.408843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.408877 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.408889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.408914 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 07:55:27 crc kubenswrapper[4982]: E0123 07:55:27.409365 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.168:6443: connect: connection refused" node="crc" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.763831 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.766937 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:55:52.553657378 +0000 UTC Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.847905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.847951 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.847966 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.852051 4982 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa" exitCode=0 Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.852113 4982 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.852227 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.853200 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.853228 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.853238 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.853741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.853847 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.854723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.854760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.854778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.856394 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.856436 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.856455 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.856552 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.860870 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.860908 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.860921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:27 
crc kubenswrapper[4982]: I0123 07:55:27.862903 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.862945 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.862962 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.862961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc"} Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.863712 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.863734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:27 crc kubenswrapper[4982]: I0123 07:55:27.863743 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.767679 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:23:12.850544233 +0000 UTC Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.869594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6"} Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.869685 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79"} Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.869704 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.871248 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.871358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.871443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.872354 4982 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4" exitCode=0 Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.872418 4982 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4"} Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.872650 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.872724 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.873861 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.873914 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.873932 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.874524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.874656 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:28 crc kubenswrapper[4982]: I0123 07:55:28.874734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.009682 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.010809 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.010846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.010861 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.010890 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.768180 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:16:52.104337696 +0000 UTC Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.877251 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b"} Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.877498 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8"} Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.877510 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a"} Jan 23 07:55:29 
crc kubenswrapper[4982]: I0123 07:55:29.877518 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3"} Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.877297 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.877552 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.878854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.879062 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:29 crc kubenswrapper[4982]: I0123 07:55:29.879074 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.769043 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:52:08.561855261 +0000 UTC Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.883449 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6"} Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.883561 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.884566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.884609 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.884646 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.949261 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.949586 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.951033 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.951097 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:30 crc kubenswrapper[4982]: I0123 07:55:30.951116 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.770187 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:28:59.690918714 +0000 UTC Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.886212 4982 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.887137 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.887172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.887185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.903580 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.903818 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.903921 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.904822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.904921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:31 crc kubenswrapper[4982]: I0123 07:55:31.904985 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:32 crc kubenswrapper[4982]: I0123 07:55:32.439607 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 07:55:32 crc kubenswrapper[4982]: I0123 07:55:32.770828 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:27:00.172674869 +0000 UTC Jan 23 07:55:32 crc kubenswrapper[4982]: I0123 07:55:32.890094 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:32 crc kubenswrapper[4982]: I0123 07:55:32.892090 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:32 crc kubenswrapper[4982]: I0123 07:55:32.892155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:32 crc kubenswrapper[4982]: I0123 07:55:32.892173 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.120181 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.120353 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.121560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.121606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.121617 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 
07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.157225 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.164744 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.626368 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.626503 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.626541 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.627653 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.627684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.627694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.680830 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.771080 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:44:56.033406453 +0000 UTC Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.892653 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.892696 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.893660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.893697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.893709 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.893720 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.893752 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:33 crc kubenswrapper[4982]: I0123 07:55:33.893767 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:34 crc kubenswrapper[4982]: I0123 07:55:34.771560 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:36:20.791190682 +0000 UTC Jan 23 07:55:34 crc kubenswrapper[4982]: I0123 07:55:34.895137 4982 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Jan 23 07:55:34 crc kubenswrapper[4982]: I0123 07:55:34.895195 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:34 crc kubenswrapper[4982]: I0123 07:55:34.896299 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:34 crc kubenswrapper[4982]: I0123 07:55:34.896505 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:34 crc kubenswrapper[4982]: I0123 07:55:34.896612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:35 crc kubenswrapper[4982]: I0123 07:55:35.772692 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:36:55.251258183 +0000 UTC Jan 23 07:55:35 crc kubenswrapper[4982]: E0123 07:55:35.904877 4982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 07:55:35 crc kubenswrapper[4982]: I0123 07:55:35.944005 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:35 crc kubenswrapper[4982]: I0123 07:55:35.944252 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:35 crc kubenswrapper[4982]: I0123 07:55:35.945559 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:35 crc kubenswrapper[4982]: I0123 07:55:35.945806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:35 crc kubenswrapper[4982]: I0123 07:55:35.945955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.033577 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.034068 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.034399 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.036260 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.036293 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.036305 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.040954 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.243249 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.774455 4982 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:33:31.681355852 +0000 UTC Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.900914 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.902800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.902860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:36 crc kubenswrapper[4982]: I0123 07:55:36.902888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:37 crc kubenswrapper[4982]: I0123 07:55:37.762620 4982 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 07:55:37 crc kubenswrapper[4982]: E0123 07:55:37.765542 4982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 07:55:37 crc kubenswrapper[4982]: I0123 07:55:37.775299 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:36:16.064112721 +0000 UTC Jan 23 07:55:37 crc kubenswrapper[4982]: I0123 07:55:37.903668 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:37 crc kubenswrapper[4982]: I0123 07:55:37.904710 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:37 crc kubenswrapper[4982]: I0123 07:55:37.904754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:37 crc kubenswrapper[4982]: I0123 07:55:37.904764 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:38 crc kubenswrapper[4982]: W0123 07:55:38.609863 4982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 07:55:38 crc kubenswrapper[4982]: I0123 07:55:38.610003 4982 trace.go:236] Trace[1287665596]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 07:55:28.608) (total time: 10001ms): Jan 23 07:55:38 crc kubenswrapper[4982]: Trace[1287665596]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:55:38.609) Jan 23 07:55:38 crc kubenswrapper[4982]: Trace[1287665596]: [10.001862599s] [10.001862599s] END Jan 23 07:55:38 crc kubenswrapper[4982]: E0123 07:55:38.610040 4982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 07:55:38 crc kubenswrapper[4982]: I0123 07:55:38.726408 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 07:55:38 crc kubenswrapper[4982]: I0123 07:55:38.726537 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 07:55:38 crc kubenswrapper[4982]: I0123 07:55:38.739612 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 07:55:38 crc kubenswrapper[4982]: I0123 07:55:38.739763 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 07:55:38 crc kubenswrapper[4982]: I0123 07:55:38.775521 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:25:02.676865184 +0000 UTC Jan 23 07:55:39 crc kubenswrapper[4982]: I0123 07:55:39.244605 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:55:39 crc kubenswrapper[4982]: I0123 07:55:39.244705 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 07:55:39 crc kubenswrapper[4982]: I0123 07:55:39.776080 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:26:13.475913072 +0000 UTC Jan 23 07:55:40 crc kubenswrapper[4982]: I0123 07:55:40.777167 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:23:12.589229115 +0000 UTC Jan 23 07:55:41 crc kubenswrapper[4982]: I0123 07:55:41.778106 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:11:44.861020823 +0000 UTC Jan 23 07:55:42 crc kubenswrapper[4982]: I0123 
07:55:42.147986 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 07:55:42 crc kubenswrapper[4982]: I0123 07:55:42.166561 4982 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 23 07:55:42 crc kubenswrapper[4982]: I0123 07:55:42.779259 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:49:23.735015372 +0000 UTC
Jan 23 07:55:42 crc kubenswrapper[4982]: I0123 07:55:42.805709 4982 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 23 07:55:42 crc kubenswrapper[4982]: I0123 07:55:42.869291 4982 csr.go:261] certificate signing request csr-vcpzq is approved, waiting to be issued
Jan 23 07:55:42 crc kubenswrapper[4982]: I0123 07:55:42.881952 4982 csr.go:257] certificate signing request csr-vcpzq is issued
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.635244 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.635558 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.640233 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.640314 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.640346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.645795 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.714446 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.714767 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.716588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.716707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.716732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.724650 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.727183 4982 trace.go:236] Trace[259064015]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 07:55:29.920) (total time: 13806ms):
Jan 23 07:55:43 crc kubenswrapper[4982]: Trace[259064015]: ---"Objects listed" error: 13806ms (07:55:43.727)
Jan 23 07:55:43 crc kubenswrapper[4982]: Trace[259064015]: [13.806605612s] [13.806605612s] END
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.727225 4982 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.731528 4982 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.731776 4982 trace.go:236] Trace[1840538450]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 07:55:29.294) (total time: 14437ms):
Jan 23 07:55:43 crc kubenswrapper[4982]: Trace[1840538450]: ---"Objects listed" error: 14436ms (07:55:43.731)
Jan 23 07:55:43 crc kubenswrapper[4982]: Trace[1840538450]: [14.437134601s] [14.437134601s] END
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.731816 4982 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.734160 4982 trace.go:236] Trace[708193962]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 07:55:29.172) (total time: 14561ms):
Jan 23 07:55:43 crc kubenswrapper[4982]: Trace[708193962]: ---"Objects listed" error: 14561ms (07:55:43.734)
Jan 23 07:55:43 crc kubenswrapper[4982]: Trace[708193962]: [14.561603398s] [14.561603398s] END
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.734197 4982 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.734610 4982 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.748081 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.761501 4982 apiserver.go:52] "Watching apiserver"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.764694 4982 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.765086 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.765570 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.765595 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.765717 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.765806 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.766078 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.766357 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.766485 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.766769 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.766867 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.767829 4982 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.769013 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.769543 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.769670 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.771312 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.772851 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.772898 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.773041 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.773169 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.773172 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.779552 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:29:02.45973995 +0000 UTC
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.832933 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.832977 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.832999 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.833018 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.833034 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.833050 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.833065 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.834466 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.835432 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.835993 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836029 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836311 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836505 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836565 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836847 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836918 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836951 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.836977 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837003 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837031 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837058 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837126 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837157 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837190 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837224 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837245 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837273 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837303 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837395 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837451 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837480 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837506 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837532 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837556 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837606 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837651 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837681 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837706 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837735 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837762 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837793 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837818 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837848 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837875 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837905 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837932 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837961 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.837989 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838012 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838041 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838068 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838091 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838118 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838143 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838200 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838227 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838256 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838282 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838310 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838337 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838360 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838414 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838440 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838461 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838488 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838515 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838539 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838598 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838644 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838672 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838700 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838731 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838755 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838785 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838821 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838847 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838875 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838908 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838933 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838962 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.838993 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839020 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839043 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839072 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839102 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839125 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839153 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839183 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839209 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839237 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839263 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839291 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839317 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839346 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839372 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839425 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839451 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839473 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839498 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839524 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839552 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839577 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839603 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839650 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839675 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839701 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839724 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839750 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839775 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839807 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839839 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839861 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839892 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839928 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839951 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.839978 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840006 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840036 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840065 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840095 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840126 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840152 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840179 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840205 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840233 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840258 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840291 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840316 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840341 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840369 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840396 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840451 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840479 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840509 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840534 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840563 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840567 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840597 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840791 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.840980 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841139 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841287 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841258 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841419 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841565 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841616 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.841894 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842211 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842336 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842393 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842415 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842476 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842506 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842537 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842564 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842592 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842616 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842663 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842718 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842739 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod
"49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842750 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842781 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842813 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842844 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842876 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842908 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.842973 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843005 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843036 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843114 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843146 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843161 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843177 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843215 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843249 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843286 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843313 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843342 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843371 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843398 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843452 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843481 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843506 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843541 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843568 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843598 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843604 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843641 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843671 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843702 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843728 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843756 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843782 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843809 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843839 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843869 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod 
"5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843903 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843936 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843965 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.843991 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844016 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844043 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844045 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844092 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844125 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844153 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844174 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844197 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844253 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844276 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844280 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844349 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844378 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844400 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844424 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844453 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844478 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844502 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844519 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844517 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844540 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844564 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844589 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844608 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844646 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844742 4982 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844757 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844767 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844778 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844790 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844803 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844815 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844826 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844839 4982 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844852 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844861 4982 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844873 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844885 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844895 4982 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844905 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844909 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844916 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844972 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.844990 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845005 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845025 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845041 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845057 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845071 4982 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845091 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845107 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845053 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845416 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.845780 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.846264 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.846998 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.847385 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.848406 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.848762 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.848990 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.849059 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.849360 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.849681 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.849782 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.850753 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.854777 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.870546 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.870539 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.870778 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.870963 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871125 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871257 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871301 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871505 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871575 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871686 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871866 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.871880 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.872246 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.872504 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.872572 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.872739 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.872929 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.872981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.873185 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.873195 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.873414 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.873466 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.873640 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.873786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874033 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874114 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874314 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874322 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874482 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874586 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874585 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874728 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874848 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.874990 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875121 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875132 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875211 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875588 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875808 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875835 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.875947 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876008 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876115 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876254 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876391 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876470 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876521 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876538 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876510 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876790 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876834 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.877072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.876953 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.877217 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.877279 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.877354 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.877611 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.877920 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.878397 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.878605 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.879981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.880334 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.880796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.881036 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.882589 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.882782 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.882905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.882937 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.882971 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 07:50:42 +0000 UTC, rotation deadline is 2026-11-14 19:54:20.705209894 +0000 UTC Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.883037 4982 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7091h58m36.822175652s for next certificate rotation Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.883314 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.883469 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.883796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.884191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.884419 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.884689 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.884765 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.884877 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.884946 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.885198 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.885351 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.885731 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.885910 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.886214 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.886221 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.886538 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.886552 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.886914 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887001 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887008 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887064 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887364 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887491 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887598 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887821 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.887919 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.888012 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.888034 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.888206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.888402 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.888965 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.889178 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.889187 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.889374 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.889454 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.889607 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.890060 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.890831 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.890958 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.891159 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.891214 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.891663 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.891759 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892116 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892198 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892458 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892530 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892566 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892709 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892788 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.892983 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.893410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.894069 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.893878 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.894142 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.894351 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.894435 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.894529 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.894981 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.895166 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.895431 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.895476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.896072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.896168 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.896507 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.896541 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.896559 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.896590 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.896706 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.896959 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897015 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897135 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897197 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897231 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897242 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897401 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.897645 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.897731 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:44.397706556 +0000 UTC m=+18.878070012 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897778 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897805 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.897836 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897880 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.898015 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:44.397994484 +0000 UTC m=+18.878357950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.898061 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:55:44.398044875 +0000 UTC m=+18.878408261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.897416 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.898091 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:44.398083566 +0000 UTC m=+18.878446952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.898862 4982 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.899722 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.902069 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.917685 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.917717 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.917732 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:43 crc kubenswrapper[4982]: E0123 07:55:43.917909 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:44.417859951 +0000 UTC m=+18.898223337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.917991 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.918531 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.918646 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.923150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.924262 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.926620 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.927594 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.930578 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.932272 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.935139 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59884->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.935182 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59884->192.168.126.11:17697: read: connection reset by peer" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.935418 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.935446 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.936355 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.939099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.940462 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.943232 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.945770 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.945838 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.945940 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.945952 4982 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.945962 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.945971 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946003 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946012 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946020 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946028 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946037 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946045 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 
23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946073 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946082 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946091 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946102 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946112 4982 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946120 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946128 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946157 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946166 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946174 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946182 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946191 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946199 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946208 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946235 4982 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946245 4982 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946253 4982 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946260 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946268 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946276 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946284 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946311 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946320 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946329 4982 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946338 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946346 4982 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946354 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946362 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946392 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946401 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946410 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946419 4982 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946428 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946437 4982 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946445 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946492 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946499 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946507 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946516 4982 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946525 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946552 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946560 4982 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946568 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946576 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946584 4982 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946592 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946601 4982 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946642 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946651 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946659 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946667 4982 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946675 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946683 4982 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946693 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946729 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946743 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.946851 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947046 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947066 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947162 4982 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947180 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947195 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947208 4982 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947220 4982 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947271 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947285 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947299 4982 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947313 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947325 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947340 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947352 4982 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947363 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947375 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947387 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947398 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947411 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947422 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947434 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947446 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947458 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947472 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947484 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947496 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947508 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947521 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947536 4982 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947548 4982 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947560 4982 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947572 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") 
on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947584 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947600 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947612 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947640 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947655 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947667 4982 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947680 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947695 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947708 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947721 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947733 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947745 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947758 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 07:55:43 crc kubenswrapper[4982]: 
I0123 07:55:43.947770 4982 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947782 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947795 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947807 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947819 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947833 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947845 4982 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947858 4982 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947870 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947883 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947897 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947911 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947923 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947935 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947948 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947960 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947973 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.947986 4982 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948000 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948013 4982 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948026 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948039 4982 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948052 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948064 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948077 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948058 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948091 4982 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948199 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948214 4982 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948225 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948237 4982 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948247 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948257 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948268 4982 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948282 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948294 4982 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948307 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948318 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948329 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948338 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948349 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948359 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948371 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948381 4982 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948392 4982 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948401 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948411 4982 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948421 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948431 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948443 4982 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948459 4982 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948469 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948480 4982 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948489 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948498 4982 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948507 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.948518 4982 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.952263 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.965213 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.968496 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.980208 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.989209 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.991390 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 07:55:43 crc kubenswrapper[4982]: I0123 07:55:43.992260 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.049162 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.049228 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.096903 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 23 07:55:44 crc kubenswrapper[4982]: W0123 07:55:44.112462 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-448049e18dbe16a5ba4723e2f43f318cbe1fbb86c9acaf81572ff4a130f9e7f5 WatchSource:0}: Error finding container 448049e18dbe16a5ba4723e2f43f318cbe1fbb86c9acaf81572ff4a130f9e7f5: Status 404 returned error can't find the container with id 448049e18dbe16a5ba4723e2f43f318cbe1fbb86c9acaf81572ff4a130f9e7f5
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.115493 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.123158 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 23 07:55:44 crc kubenswrapper[4982]: W0123 07:55:44.141844 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6517c44686cdc6287c526392d655808da3c86a339b57a96effd28629d5480537 WatchSource:0}: Error finding container 6517c44686cdc6287c526392d655808da3c86a339b57a96effd28629d5480537: Status 404 returned error can't find the container with id 6517c44686cdc6287c526392d655808da3c86a339b57a96effd28629d5480537
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.150697 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.150759 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.460287 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.460754 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:55:45.460714744 +0000 UTC m=+19.941078130 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.464154 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.464230 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.464277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.464319 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464470 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464498 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464512 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464565 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:45.464550559 +0000 UTC m=+19.944913945 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464663 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464663 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464727 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464734 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464758 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.464692 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:45.464681513 +0000 UTC m=+19.945044889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.465017 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:45.4649396 +0000 UTC m=+19.945302986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 07:55:44 crc kubenswrapper[4982]: E0123 07:55:44.465054 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:45.465039603 +0000 UTC m=+19.945402979 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.779663 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:59:32.772267566 +0000 UTC
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.923970 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.924022 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.924034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6517c44686cdc6287c526392d655808da3c86a339b57a96effd28629d5480537"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.925387 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fb7bb6d2eac79faa01d1317db9c54553f8f4c89bbfea7c54adb74edf4658a633"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.926442 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.926465 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"448049e18dbe16a5ba4723e2f43f318cbe1fbb86c9acaf81572ff4a130f9e7f5"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.928538 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.930184 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6" exitCode=255
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.930348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6"}
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.930812 4982 scope.go:117] "RemoveContainer" containerID="89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6"
Jan 23 07:55:44 crc kubenswrapper[4982]: I0123 07:55:44.961018 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:44Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.005344 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.028191 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.081009 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.122575 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.142925 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.159668 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.169652 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.181946 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.197530 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.213537 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.254846 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.273771 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.281442 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4j96f"] Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.281854 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.284110 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.284850 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.285165 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.292881 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.305762 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.330171 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.351593 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.371580 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.371947 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2-hosts-file\") pod \"node-resolver-4j96f\" (UID: \"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\") " pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.371981 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zjb\" (UniqueName: \"kubernetes.io/projected/c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2-kube-api-access-22zjb\") pod \"node-resolver-4j96f\" (UID: \"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\") " pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.389548 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.424385 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.447724 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apise
rver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.464773 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.472929 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473020 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473052 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473071 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zjb\" (UniqueName: \"kubernetes.io/projected/c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2-kube-api-access-22zjb\") pod \"node-resolver-4j96f\" (UID: \"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\") " pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473182 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:55:47.473135413 +0000 UTC m=+21.953498809 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473295 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473402 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2-hosts-file\") pod \"node-resolver-4j96f\" (UID: \"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\") " pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473413 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473436 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473449 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473497 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:47.473480732 +0000 UTC m=+21.953844118 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.473524 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2-hosts-file\") pod \"node-resolver-4j96f\" (UID: \"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\") " pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473706 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473728 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473746 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473781 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473801 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:47.473787641 +0000 UTC m=+21.954151037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473850 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:47.473839472 +0000 UTC m=+21.954202878 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473876 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.473941 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:47.473934005 +0000 UTC m=+21.954297391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.481412 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.489924 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zjb\" (UniqueName: \"kubernetes.io/projected/c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2-kube-api-access-22zjb\") pod \"node-resolver-4j96f\" (UID: \"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\") " pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.498584 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.518611 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.594005 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4j96f" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.672865 4982 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 07:55:45 crc kubenswrapper[4982]: W0123 07:55:45.673443 4982 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 23 07:55:45 crc kubenswrapper[4982]: W0123 07:55:45.673490 4982 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 07:55:45 crc kubenswrapper[4982]: W0123 07:55:45.673492 4982 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.710067 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ccl65"] Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.710664 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2tlww"] Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.710947 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.710989 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.718949 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719309 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gfvtq"] Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719500 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719648 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719719 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.718995 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.718995 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719888 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719031 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719930 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719973 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.719046 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.743890 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.743899 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.757901 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779045 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-system-cni-dir\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779091 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779113 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cni-binary-copy\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779137 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-os-release\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779158 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5bx\" (UniqueName: \"kubernetes.io/projected/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-kube-api-access-nx5bx\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779200 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cnibin\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779229 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.779822 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:51:39.790614615 +0000 UTC Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.803117 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.826438 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.826503 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.826537 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.826591 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.826707 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:45 crc kubenswrapper[4982]: E0123 07:55:45.826770 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.831028 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.831983 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.833099 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.833398 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.834154 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.835455 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.836116 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.836829 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.842030 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.842841 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.844026 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.844645 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.845769 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.846448 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.847734 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.848357 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.849328 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.850079 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.850567 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.851722 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.852356 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.852921 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.853748 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.858043 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.858823 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.860612 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.861407 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.864737 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.866698 4982 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.867252 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.868380 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.868904 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.870212 4982 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.874815 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.870324 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879161 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879686 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-os-release\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879737 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-kubelet\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879766 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f08e29d-accb-44b3-b466-5edf48e58f17-mcd-auth-proxy-config\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879793 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-cni-bin\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879820 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-hostroot\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879852 4982 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879857 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-k8s-cni-cncf-io\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879882 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksvln\" (UniqueName: \"kubernetes.io/projected/a3e50013-d63d-4e43-924e-74237cd4a690-kube-api-access-ksvln\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879910 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879935 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-socket-dir-parent\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-system-cni-dir\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.879994 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f08e29d-accb-44b3-b466-5edf48e58f17-rootfs\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880017 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-multus-certs\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880040 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-cnibin\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880062 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-conf-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880087 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-cni-multus\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880112 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-cni-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880136 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5bx\" (UniqueName: \"kubernetes.io/projected/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-kube-api-access-nx5bx\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880173 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgx8c\" (UniqueName: \"kubernetes.io/projected/5f08e29d-accb-44b3-b466-5edf48e58f17-kube-api-access-qgx8c\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880199 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a3e50013-d63d-4e43-924e-74237cd4a690-cni-binary-copy\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880222 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a3e50013-d63d-4e43-924e-74237cd4a690-multus-daemon-config\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880246 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cnibin\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880297 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-etc-kubernetes\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-netns\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880375 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cni-binary-copy\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880400 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f08e29d-accb-44b3-b466-5edf48e58f17-proxy-tls\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880422 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-system-cni-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-os-release\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.880978 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.881057 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-os-release\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.881978 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.882053 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-system-cni-dir\") pod 
\"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.882366 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cnibin\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.882762 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.882854 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.883350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-cni-binary-copy\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.883548 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.885459 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.886127 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.887338 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.887925 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.888524 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.889657 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.891211 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.891771 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.893073 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.893658 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.894799 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.895289 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.896224 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.896748 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.897268 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.899001 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.899465 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.899867 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.904131 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5bx\" (UniqueName: \"kubernetes.io/projected/6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c-kube-api-access-nx5bx\") pod \"multus-additional-cni-plugins-ccl65\" (UID: \"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\") " pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.912141 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.929112 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.934935 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4j96f" event={"ID":"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2","Type":"ContainerStarted","Data":"5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765"} Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.934984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-4j96f" event={"ID":"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2","Type":"ContainerStarted","Data":"df272f71559e9caf4c7a8cd36f30f1cef4c5990275565434ae8d0f194da0ea6d"} Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.937504 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.939500 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858"} Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.939762 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.947271 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.961909 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.980596 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981132 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-netns\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981170 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f08e29d-accb-44b3-b466-5edf48e58f17-proxy-tls\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981192 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-system-cni-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981208 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-os-release\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981226 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-kubelet\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981242 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-cni-bin\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981261 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-hostroot\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f08e29d-accb-44b3-b466-5edf48e58f17-mcd-auth-proxy-config\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981302 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-k8s-cni-cncf-io\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981347 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksvln\" (UniqueName: \"kubernetes.io/projected/a3e50013-d63d-4e43-924e-74237cd4a690-kube-api-access-ksvln\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-system-cni-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981363 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-os-release\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981362 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-socket-dir-parent\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981401 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-socket-dir-parent\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981430 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-hostroot\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981468 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-kubelet\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981458 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f08e29d-accb-44b3-b466-5edf48e58f17-rootfs\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981500 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-cni-bin\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981502 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-multus-certs\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981527 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-conf-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981531 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f08e29d-accb-44b3-b466-5edf48e58f17-rootfs\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981543 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-cnibin\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981562 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-multus-certs\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981588 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-conf-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981589 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-cni-multus\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981564 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-var-lib-cni-multus\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981615 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-k8s-cni-cncf-io\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981642 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-cni-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981672 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a3e50013-d63d-4e43-924e-74237cd4a690-cni-binary-copy\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981690 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a3e50013-d63d-4e43-924e-74237cd4a690-multus-daemon-config\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981709 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgx8c\" (UniqueName: \"kubernetes.io/projected/5f08e29d-accb-44b3-b466-5edf48e58f17-kube-api-access-qgx8c\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981736 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-etc-kubernetes\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981782 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-etc-kubernetes\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981820 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-cnibin\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.981971 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-multus-cni-dir\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.982102 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f08e29d-accb-44b3-b466-5edf48e58f17-mcd-auth-proxy-config\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.982195 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a3e50013-d63d-4e43-924e-74237cd4a690-host-run-netns\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.982852 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a3e50013-d63d-4e43-924e-74237cd4a690-multus-daemon-config\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.982858 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a3e50013-d63d-4e43-924e-74237cd4a690-cni-binary-copy\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.991833 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f08e29d-accb-44b3-b466-5edf48e58f17-proxy-tls\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.997710 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgx8c\" (UniqueName: \"kubernetes.io/projected/5f08e29d-accb-44b3-b466-5edf48e58f17-kube-api-access-qgx8c\") pod \"machine-config-daemon-2tlww\" (UID: \"5f08e29d-accb-44b3-b466-5edf48e58f17\") " pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:45 crc kubenswrapper[4982]: I0123 07:55:45.997790 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.007682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksvln\" (UniqueName: \"kubernetes.io/projected/a3e50013-d63d-4e43-924e-74237cd4a690-kube-api-access-ksvln\") pod \"multus-gfvtq\" (UID: \"a3e50013-d63d-4e43-924e-74237cd4a690\") " pod="openshift-multus/multus-gfvtq" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.024481 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.033372 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ccl65" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.039201 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.044184 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:55:46 crc kubenswrapper[4982]: W0123 07:55:46.047977 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f7f9a3e_f8e7_4da6_bf78_47c9895b7d7c.slice/crio-9630891fe6952d0fef28f931cfe0035e68979f0fe5ba4fe312b2815095005faa WatchSource:0}: Error finding container 9630891fe6952d0fef28f931cfe0035e68979f0fe5ba4fe312b2815095005faa: Status 404 returned error can't find the container with id 9630891fe6952d0fef28f931cfe0035e68979f0fe5ba4fe312b2815095005faa Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.049446 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gfvtq" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.052603 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.073657 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd36
7c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.088609 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.113250 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.134413 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.148231 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.154231 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwb28"] Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.156153 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.157974 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.158467 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.158679 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.158830 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.158693 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.159229 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.159552 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.178299 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.191064 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.204929 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.226356 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.248936 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.249832 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.253730 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.259065 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.267994 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.285952 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovn-node-metrics-cert\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-env-overrides\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286100 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286117 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-netns\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286151 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-bin\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286169 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-script-lib\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286196 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-systemd-units\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286213 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-netd\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286257 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-config\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-var-lib-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286308 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-etc-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286334 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286359 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-systemd\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286379 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-kubelet\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-slash\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286440 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrrv\" (UniqueName: \"kubernetes.io/projected/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-kube-api-access-khrrv\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-ovn\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286482 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-node-log\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.286515 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-log-socket\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.303931 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.318196 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.331381 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.341723 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.355571 4982 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.375511 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.386861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovn-node-metrics-cert\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.386907 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-env-overrides\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.386932 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.386950 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-netns\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.386974 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-bin\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.386994 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-script-lib\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387030 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-systemd-units\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387035 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387075 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-netd\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387088 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-bin\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387097 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-config\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387137 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-var-lib-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387167 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-etc-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387182 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387200 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-systemd\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387215 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-kubelet\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387202 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387248 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-slash\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-var-lib-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-etc-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387230 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-slash\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387324 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-openvswitch\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387347 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-systemd\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387366 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-kubelet\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-netns\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khrrv\" (UniqueName: \"kubernetes.io/projected/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-kube-api-access-khrrv\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387395 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387420 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-systemd-units\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387929 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-ovn\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.387977 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-node-log\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.388001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-log-socket\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.388142 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-log-socket\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.388708 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-script-lib\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.388772 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-ovn\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.388826 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-node-log\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.388858 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-netd\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.389227 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-config\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.390969 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-env-overrides\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.395055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovn-node-metrics-cert\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.408491 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.412723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khrrv\" (UniqueName: \"kubernetes.io/projected/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-kube-api-access-khrrv\") pod \"ovnkube-node-pwb28\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.425289 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.443695 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.461837 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.473521 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.476885 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.497991 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: W0123 07:55:46.508305 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e15f1e6_f3ea_4b15_89c0_e867fd52646e.slice/crio-4dad7a794679558f392d5d0206e1b781f1460bf8d79c6f4259f83cc0ddd3d1fb WatchSource:0}: Error finding container 4dad7a794679558f392d5d0206e1b781f1460bf8d79c6f4259f83cc0ddd3d1fb: Status 404 returned error can't find the container with id 4dad7a794679558f392d5d0206e1b781f1460bf8d79c6f4259f83cc0ddd3d1fb Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.519771 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.533402 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.550083 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.565205 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.579807 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.601511 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.619313 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.633355 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.647738 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.657940 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.672247 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.690880 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.693284 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.706712 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.718823 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.729222 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.734089 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.753181 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.781527 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:44:44.544503696 +0000 UTC Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.801918 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.836859 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.878798 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.918180 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.935765 4982 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.938830 4982 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.939018 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.939134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.939416 4982 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.944073 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerStarted","Data":"574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.944165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerStarted","Data":"130bc42cc130893a8e043849538612c16a696c37775b297670aaff8598812ca0"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.946488 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c" containerID="99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d" exitCode=0 Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.946663 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerDied","Data":"99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.946760 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerStarted","Data":"9630891fe6952d0fef28f931cfe0035e68979f0fe5ba4fe312b2815095005faa"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.948828 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.950151 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"4dad7a794679558f392d5d0206e1b781f1460bf8d79c6f4259f83cc0ddd3d1fb"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.955715 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.955775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.955792 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"82c9cbdbad4e4c8cf1e3f8eb85b43b193be26943ad3da0f43871e6166b520b9d"} Jan 23 07:55:46 crc kubenswrapper[4982]: I0123 07:55:46.964590 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.007835 4982 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.008282 4982 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.010003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.010091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.010121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.010171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc 
kubenswrapper[4982]: I0123 07:55:47.010206 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.035665 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.046091 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.047032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.047105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.047121 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.047152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.047174 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.064437 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.068163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.068194 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.068205 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.068222 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.068231 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.086645 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.093043 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.094532 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.094588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.094603 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.094686 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.094703 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.108524 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.121250 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.121281 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.121289 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.121303 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.121312 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.128109 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.141561 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9
454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.141819 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.144091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.144128 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.144140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.144156 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.144167 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.179086 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.197455 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.246161 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.246370 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.246445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.246508 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.246578 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.247822 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.275426 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbf
b163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.313771 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.348538 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.349081 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.349168 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.349256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.349327 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.356481 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.400061 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.434733 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.454915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.454958 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.454969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.454987 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.454999 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.481239 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.500925 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.501031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.501066 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.501088 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.501112 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501219 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501232 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501243 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501281 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:51.501266481 +0000 UTC m=+25.981629867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501542 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:55:51.501534789 +0000 UTC m=+25.981898175 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501589 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501601 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501610 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501654 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-23 07:55:51.501645942 +0000 UTC m=+25.982009328 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501704 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501725 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:51.501720034 +0000 UTC m=+25.982083420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.501908 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.502044 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:51.502002772 +0000 UTC m=+25.982366238 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.529731 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.558318 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.558399 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.558423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.558454 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.558477 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.562090 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.594259 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.640847 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.661428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.661470 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.661479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.661495 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.661504 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.679235 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.718031 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.764671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.764713 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.764723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.764737 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.764749 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.782104 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:58:20.345881825 +0000 UTC Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.827208 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.827257 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.827405 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.827445 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.827527 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:47 crc kubenswrapper[4982]: E0123 07:55:47.827680 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.867831 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.867868 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.867877 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.867890 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.867899 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.961894 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" exitCode=0 Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.962011 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.964483 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c" containerID="8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4" exitCode=0 Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.964576 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerDied","Data":"8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.969754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.969805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.969825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.969844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.969859 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:47Z","lastTransitionTime":"2026-01-23T07:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.983381 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:47 crc kubenswrapper[4982]: I0123 07:55:47.998710 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:47Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.015075 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.031754 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.041890 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pqvl7"] Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.042418 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.045178 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.045729 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.046106 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.055092 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.074684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.074819 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z 
is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.074918 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.075087 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.075120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.075135 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.102604 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.115356 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.131287 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.156768 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.177738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.177786 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.177798 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.177823 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.177838 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.197304 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.208013 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/658d846d-029f-4188-91c9-6f797e0ffa52-host\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.208249 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/658d846d-029f-4188-91c9-6f797e0ffa52-serviceca\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.208390 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krqrv\" (UniqueName: 
\"kubernetes.io/projected/658d846d-029f-4188-91c9-6f797e0ffa52-kube-api-access-krqrv\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.235880 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.281197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.281403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.281465 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.281525 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.281577 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.283188 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.309129 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/658d846d-029f-4188-91c9-6f797e0ffa52-serviceca\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.309182 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krqrv\" (UniqueName: \"kubernetes.io/projected/658d846d-029f-4188-91c9-6f797e0ffa52-kube-api-access-krqrv\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.309271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/658d846d-029f-4188-91c9-6f797e0ffa52-host\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.309338 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/658d846d-029f-4188-91c9-6f797e0ffa52-host\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.310038 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/658d846d-029f-4188-91c9-6f797e0ffa52-serviceca\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.317714 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.343096 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krqrv\" (UniqueName: \"kubernetes.io/projected/658d846d-029f-4188-91c9-6f797e0ffa52-kube-api-access-krqrv\") pod \"node-ca-pqvl7\" (UID: \"658d846d-029f-4188-91c9-6f797e0ffa52\") " pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.377919 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.383500 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.383540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.383552 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.383571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.383590 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.400722 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pqvl7" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.420403 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: W0123 07:55:48.435361 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod658d846d_029f_4188_91c9_6f797e0ffa52.slice/crio-2975d6ffdb4e099491d0c54d94966ed020fa948a0c0081a976d8d145a3391276 WatchSource:0}: Error finding container 2975d6ffdb4e099491d0c54d94966ed020fa948a0c0081a976d8d145a3391276: Status 404 returned error can't find the container with id 2975d6ffdb4e099491d0c54d94966ed020fa948a0c0081a976d8d145a3391276 Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.466641 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{
\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\
\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.490946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.491007 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.491023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.491047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.491062 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.500399 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.542788 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.592081 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.594351 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.594385 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.594397 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.594415 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.594424 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.621291 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb
22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.659699 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.695487 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.696962 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.696992 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.697004 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.697027 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.697039 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.738910 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.775839 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.783124 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:53:33.73923036 +0000 UTC Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.799003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.799035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.799045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.799059 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.799068 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.818854 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.864953 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.900375 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.902322 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.902385 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.902404 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.902433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.902453 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:48Z","lastTransitionTime":"2026-01-23T07:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.945561 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.969275 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pqvl7" event={"ID":"658d846d-029f-4188-91c9-6f797e0ffa52","Type":"ContainerStarted","Data":"f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.969334 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pqvl7" event={"ID":"658d846d-029f-4188-91c9-6f797e0ffa52","Type":"ContainerStarted","Data":"2975d6ffdb4e099491d0c54d94966ed020fa948a0c0081a976d8d145a3391276"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.973316 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.973352 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.973366 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.973379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.973393 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.973405 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.975645 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c" containerID="c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81" exitCode=0 Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.975683 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" 
event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerDied","Data":"c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81"} Jan 23 07:55:48 crc kubenswrapper[4982]: I0123 07:55:48.985732 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.006134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.006181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.006193 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.006216 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc 
kubenswrapper[4982]: I0123 07:55:49.006231 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.015905 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.067010 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e99
2108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.096668 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.108963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.109041 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.109054 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.109075 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.109087 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.136755 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.174994 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.212053 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.212091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.212100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.212114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.212124 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.227755 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb
22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.264543 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.295972 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.315203 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.315418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.315482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.315549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.315605 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.339233 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.376877 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.418342 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.418404 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.418421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.418768 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.418798 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.423680 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.462038 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.501072 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.521217 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.521261 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.521275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.521293 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.521307 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.542007 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.578583 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.624282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.624313 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
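
Every "Failed to update status for pod" entry above dies on the same error: the pod.network-node-identity.openshift.io webhook served at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23. A minimal, stdlib-only sketch of pulling those two timestamps out of the journal text and measuring how far past expiry the clock sits (the regex and the journalctl pipe are illustrative tooling, not part of kubelet):

```python
import re
import sys
from datetime import datetime

# Matches the x509 failure kubelet repeats for every status patch, e.g.:
#   current time 2026-01-23T07:55:49Z is after 2025-08-24T17:21:41Z
TS = r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)"
PATTERN = re.compile(rf"current time {TS} is after {TS}")
FMT = "%Y-%m-%dT%H:%M:%SZ"

for line in sys.stdin:
    m = PATTERN.search(line)
    if not m:
        continue
    now = datetime.strptime(m.group(1), FMT)
    not_after = datetime.strptime(m.group(2), FMT)
    print(f"clock={now}  cert notAfter={not_after}  expired {now - not_after} ago")
    break
```

Fed the output of `journalctl -u kubelet` from this boot, it would report the webhook certificate as roughly five months past its notAfter date, consistent with a cluster image whose embedded certificates lapsed while the VM was powered off; until that certificate is rotated, every status patch routed through the webhook will keep failing the same way.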
Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.624324 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.624340 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.624350 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.727032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.727231 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.727320 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.727381 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.727435 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.784154 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:20:15.77043796 +0000 UTC Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.826544 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:49 crc kubenswrapper[4982]: E0123 07:55:49.826796 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.826581 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:49 crc kubenswrapper[4982]: E0123 07:55:49.826961 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.826544 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:49 crc kubenswrapper[4982]: E0123 07:55:49.827243 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.830514 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.830597 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.830684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.830754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.830820 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.934029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.934164 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.934221 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.934280 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.934377 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:49Z","lastTransitionTime":"2026-01-23T07:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.980999 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c" containerID="43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11" exitCode=0 Jan 23 07:55:49 crc kubenswrapper[4982]: I0123 07:55:49.981255 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerDied","Data":"43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.005706 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.021484 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.036542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.036576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.036588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.036605 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.036638 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.045423 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.060675 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.079126 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.099753 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.118514 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.140507 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.141185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.141220 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.141230 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.141245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.141256 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.156036 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.174061 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.209881 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\
"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.229752 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.243541 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.243568 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.243576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.243589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.243598 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.251672 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.269110 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.293110 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:50Z 
is after 2025-08-24T17:21:41Z" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.346053 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.346130 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.346155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.346222 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.346244 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.448647 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.448668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.448676 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.448688 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.448697 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.550736 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.550770 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.550783 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.550799 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.550809 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.654094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.654163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.654185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.654214 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.654235 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.757439 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.757509 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.757531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.757557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.757575 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.784970 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:48:11.494458189 +0000 UTC Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.860833 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.861195 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.861498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.861743 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.861945 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.965063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.965139 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.965163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.965193 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.965215 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:50Z","lastTransitionTime":"2026-01-23T07:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.990268 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.996298 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c" containerID="74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992" exitCode=0 Jan 23 07:55:50 crc kubenswrapper[4982]: I0123 07:55:50.996358 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerDied","Data":"74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.016880 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.051411 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.068748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.068822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.068839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.068869 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.068890 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.075973 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.094983 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.113112 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.140351 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z 
is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.158920 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.172395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.172441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.172452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.172469 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.172483 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.179213 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.194312 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.217369 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.235697 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.253655 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.276821 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.278326 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.278377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.278390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.278424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.278436 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.298581 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.315111 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:51Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.381736 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.381790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.381805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.381867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.381881 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.485037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.485365 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.485374 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.485387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.485396 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.547271 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.547352 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.547383 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.547417 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.547444 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.547538 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.547585 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:59.547570864 +0000 UTC m=+34.027934260 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.547933 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 07:55:59.547919503 +0000 UTC m=+34.028282899 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548022 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548042 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548057 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548095 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:59.548084478 +0000 UTC m=+34.028447874 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548147 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548159 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548168 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548193 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:59.54818483 +0000 UTC m=+34.028548226 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548244 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.548267 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:55:59.548260413 +0000 UTC m=+34.028623809 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.587927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.587963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.587976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.587993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.588004 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.693500 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.693669 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.693694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.693717 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.693736 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.785920 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:54:04.121972349 +0000 UTC Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.796454 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.796505 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.796522 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.796546 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.796565 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.826900 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.828676 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.827454 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.828809 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.827420 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:51 crc kubenswrapper[4982]: E0123 07:55:51.828912 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.899965 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.900049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.900077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.900115 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:51 crc kubenswrapper[4982]: I0123 07:55:51.900138 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:51Z","lastTransitionTime":"2026-01-23T07:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.002401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.002520 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.002533 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.002551 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.002563 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.004339 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c" containerID="1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926" exitCode=0 Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.004404 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerDied","Data":"1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.017818 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.037461 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.055599 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.072506 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.084486 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.101439 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.105316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.105423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.105443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.105470 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.105489 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.121442 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb
22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.150653 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa29
1a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.173210 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.185509 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.198694 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc 
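
Every "Failed to update status for pod" entry above fails the same way: the serving certificate of the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 has NotAfter 2025-08-24T17:21:41Z, long before the node's current clock of 2026-01-23, so each kubelet status patch dies in the TLS handshake. A minimal Go sketch of the same validity check, pointed at the address taken from these entries (InsecureSkipVerify is used only so an already-expired certificate can be inspected rather than authenticated):

```go
// Fetch the webhook endpoint's serving certificate and report its validity
// window, reproducing the "certificate has expired or is not yet valid"
// comparison the kubelet's handshake logs above. Assumes it runs on the node
// itself, since 127.0.0.1:9743 is a node-local listener.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\nNow:       %s\n", cert.NotBefore, cert.NotAfter, now)
	if now.After(cert.NotAfter) {
		// The same condition the x509 verifier reports in the entries above.
		fmt.Println("certificate has expired")
	}
}
```

An equivalent one-off check from a shell is `openssl s_client -connect 127.0.0.1:9743 </dev/null | openssl x509 -noout -dates`.
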
kubenswrapper[4982]: I0123 07:55:52.211031 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.211064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.211073 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.211153 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.211167 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.213603 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.225278 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.237412 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.247610 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:52Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.315156 4982 
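
The condition={...} payload in the recurring "Node became not ready" entries is ordinary JSON for a core/v1 NodeCondition: Ready=False with reason KubeletNotReady, caused by the missing CNI configuration in /etc/kubernetes/cni/net.d/. A self-contained sketch decoding one payload copied verbatim from the log; the struct is a pared-down local stand-in for k8s.io/api/core/v1.NodeCondition, with timestamps kept as strings for brevity:

```go
// Decode one "Node became not ready" condition from the journal to show the
// fields the kubelet is setting. The raw string is copied verbatim from the
// entries above.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```

Note the circularity this condition implies: the webhook that signs off on pod status lives in the ovnkube/network-node-identity pods, which themselves cannot become Ready while the network plugin is down.
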
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.315219 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.315237 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.315263 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.315306 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.418791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.418839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.418855 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.418875 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.418893 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.522777 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.522843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.522862 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.522889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.522907 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.626084 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.626136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.626153 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.626176 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.626193 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.729755 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.729806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.729824 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.729852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.729874 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.786824 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:31:32.452378061 +0000 UTC Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.833001 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.833058 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.833076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.833104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.833125 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.936074 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.936482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.936502 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.936526 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:52 crc kubenswrapper[4982]: I0123 07:55:52.936543 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:52Z","lastTransitionTime":"2026-01-23T07:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.014639 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" event={"ID":"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c","Type":"ContainerStarted","Data":"5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.034731 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.039497 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.039657 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.039701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.039735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.039760 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.055749 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.072423 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.112979 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z 
is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.139511 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.141743 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.141765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.141775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.141790 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.141801 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.157487 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.169338 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.186025 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.204564 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.217784 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.232959 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.244811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.244999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.245074 4982 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.245171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.245246 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.250090 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.263472 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.280110 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256
:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.294686 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:53Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.347640 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.347904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.348047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.348156 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.348270 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.451808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.451857 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.451871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.451896 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.451911 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.554276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.554312 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.554320 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.554334 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.554348 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.657268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.657506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.657577 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.657668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.657743 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.760498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.760835 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.760922 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.761022 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.761100 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.827914 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:19:26.065241752 +0000 UTC Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.828043 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.828043 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:53 crc kubenswrapper[4982]: E0123 07:55:53.828169 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:53 crc kubenswrapper[4982]: E0123 07:55:53.828262 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.831741 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:53 crc kubenswrapper[4982]: E0123 07:55:53.831902 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.863424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.863552 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.863570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.863596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.863614 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.967080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.967142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.967159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.967184 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:53 crc kubenswrapper[4982]: I0123 07:55:53.967201 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:53Z","lastTransitionTime":"2026-01-23T07:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.024144 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.024680 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.024745 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.046738 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.060671 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.063978 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.070080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.070142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.070168 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.070198 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.070223 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.070617 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\"
:\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.092122 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.110562 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.129593 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.149235 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"h
ostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.164401 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.173517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.173591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.173612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.173680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.173701 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.188194 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.206675 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.227304 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.244159 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.260569 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.276031 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.276130 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.276182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.276207 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.276225 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.292170 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.339550 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.367021 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.378707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.378756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.378766 4982 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.378785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.378796 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.393845 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ea
d87214ff1167703d39a0ab36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.426694 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.441847 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.460781 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.478210 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.481057 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.481094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.481107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.481124 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.481135 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.499295 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.518123 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.539686 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.565791 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.584244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.584361 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.584386 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.584418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.584443 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.589662 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3
ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.613932 4982 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.634449 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.655154 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.681129 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.687339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.687407 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.687427 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc 
kubenswrapper[4982]: I0123 07:55:54.687453 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.687472 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.700198 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:54Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.790693 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.790785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.790811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.790848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.790872 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.828197 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:14:35.54038432 +0000 UTC Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.893793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.893880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.893908 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.893943 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.893968 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.995936 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.995999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.996016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.996040 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:54 crc kubenswrapper[4982]: I0123 07:55:54.996054 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:54Z","lastTransitionTime":"2026-01-23T07:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.027111 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.098459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.098498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.098509 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.098525 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.098536 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.200596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.200665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.200685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.200704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.200718 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.303211 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.303260 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.303276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.303295 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.303312 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.406088 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.406137 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.406152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.406172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.406187 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.509273 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.509350 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.509375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.509409 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.509435 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.612547 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.612608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.612655 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.612681 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.612701 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.716144 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.716187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.716208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.716236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.716257 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.819042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.819103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.819120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.819146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.819163 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.827092 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:55:55 crc kubenswrapper[4982]: E0123 07:55:55.827242 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.827588 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.827778 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:55:55 crc kubenswrapper[4982]: E0123 07:55:55.827888 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 07:55:55 crc kubenswrapper[4982]: E0123 07:55:55.827772 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.828919 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:11:23.994189718 +0000 UTC Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.865770 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.886491 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.921179 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:55Z is after 2025-08-24T17:21:41Z"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.921346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.921378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.921390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.921408 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.921421 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:55Z","lastTransitionTime":"2026-01-23T07:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.943275 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.953176 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.975814 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ea
d87214ff1167703d39a0ab36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:55 crc kubenswrapper[4982]: I0123 07:55:55.993149 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.007027 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.023812 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.025647 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.025671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.025680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.025694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.025717 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.038849 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/0.log" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.043224 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.044747 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36" exitCode=1 Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.044895 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 
07:55:56.045556 4982 scope.go:117] "RemoveContainer" containerID="d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.065104 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: 
I0123 07:55:56.111939 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.128455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.128491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.128502 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.128519 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.128530 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.129251 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.148337 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.164363 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.183339 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.208907 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225
755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.222568 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.231424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.231491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.231509 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.231536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.231561 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.241543 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.251202 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.265848 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.288272 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.302385 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.317776 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.334986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.335025 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.335042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.335065 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.335084 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.337293 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.367330 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ea
d87214ff1167703d39a0ab36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 07:55:55.289599 6258 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:55.289977 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:55.290010 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:55.290036 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:55.290050 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:55.290105 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:55.290124 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:55.290132 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:55.290146 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:55.290160 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:55.290157 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:55.290194 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:55.290200 6258 factory.go:656] Stopping watch factory\\\\nI0123 07:55:55.290229 6258 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.387523 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.403854 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.424992 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.438775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.438822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.438834 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.438852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.438864 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.441280 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.463321 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.541036 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.541082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc 
kubenswrapper[4982]: I0123 07:55:56.541091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.541105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.541114 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.643264 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.643301 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.643310 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.643323 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.643333 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.746498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.746541 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.746554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.746571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.746582 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.829176 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:00:36.225880243 +0000 UTC Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.848791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.848828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.848837 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.848850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.848862 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.951853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.951912 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.951931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.951958 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:56 crc kubenswrapper[4982]: I0123 07:55:56.951975 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:56Z","lastTransitionTime":"2026-01-23T07:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.051683 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/0.log" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.053860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.053900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.053917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.053940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.053958 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.057562 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.057880 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.082392 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.103531 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.123830 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.140132 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.201268 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.202397 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.202433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.202444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.202461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.202475 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.219138 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.234082 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.248859 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.264122 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.281698 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 07:55:55.289599 6258 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:55.289977 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:55.290010 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:55.290036 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:55.290050 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:55.290105 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:55.290124 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:55.290132 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:55.290146 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:55.290160 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:55.290157 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:55.290194 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:55.290200 6258 factory.go:656] Stopping watch factory\\\\nI0123 07:55:55.290229 6258 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.295505 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.309267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.309355 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.309378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.309406 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.309428 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.311474 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.324974 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.336062 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.353480 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.411540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.411586 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc 
kubenswrapper[4982]: I0123 07:55:57.411604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.411651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.411668 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.458701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.459216 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.460068 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.460427 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.460724 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.480924 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.492065 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.492296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.492524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.492727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.492866 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.513424 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.517914 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.517975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.517992 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.518017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.518038 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.535923 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.539927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.539999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.540063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.540090 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.540108 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.558518 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.563485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.563539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.563556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.563579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.563597 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.581904 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:57Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.582052 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.583741 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.583801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.583822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.583849 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.583867 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.686755 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.687387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.687588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.687766 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.687892 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.792663 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.792730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.792747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.792775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.792793 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.826533 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.826582 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.826745 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.826778 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.826986 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:57 crc kubenswrapper[4982]: E0123 07:55:57.827176 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.829854 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:55:48.483829183 +0000 UTC Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.895951 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.896010 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.896032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.896060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.896080 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.999172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.999227 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.999244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.999268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:57 crc kubenswrapper[4982]: I0123 07:55:57.999285 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:57Z","lastTransitionTime":"2026-01-23T07:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.065147 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/1.log" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.066023 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/0.log" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.070287 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6" exitCode=1 Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.070341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.070409 4982 scope.go:117] "RemoveContainer" containerID="d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.071679 4982 scope.go:117] "RemoveContainer" containerID="85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6" Jan 23 07:55:58 crc kubenswrapper[4982]: E0123 07:55:58.071936 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.095701 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.101760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.101828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.101846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.101871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.101888 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.112775 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.131912 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.147582 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.168991 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.190438 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.205592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.205692 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.205713 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.205740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.205758 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.210476 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.224912 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.240205 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.261261 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.281505 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.297058 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.309180 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.309251 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.309267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.309284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.309296 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.314497 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.333409 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.362125 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e
3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 07:55:55.289599 6258 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:55.289977 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:55.290010 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:55.290036 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:55.290050 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:55.290105 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:55.290124 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:55.290132 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:55.290146 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:55.290160 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:55.290157 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:55.290194 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:55.290200 6258 factory.go:656] Stopping watch factory\\\\nI0123 07:55:55.290229 6258 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49d
d6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.414390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.414585 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.414821 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.414971 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.415123 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.424912 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f"] Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.425809 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.429569 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.429596 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.443899 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.463328 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.477691 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.496547 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.514792 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.519267 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.519757 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.519864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.519972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.520010 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.533947 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.551901 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.568782 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.593381 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e
3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d41ff9e7183588c120f13cfda44e5e17e857d0ead87214ff1167703d39a0ab36\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"message\\\":\\\"ft/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 07:55:55.289599 6258 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:55.289977 6258 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:55.290010 6258 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:55.290036 6258 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:55.290050 6258 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:55.290105 6258 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:55.290124 6258 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:55.290132 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:55.290146 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:55.290160 6258 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:55.290157 6258 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:55.290194 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:55.290200 6258 factory.go:656] Stopping watch factory\\\\nI0123 07:55:55.290229 6258 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49d
d6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.620003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.620307 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.620363 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9798\" (UniqueName: \"kubernetes.io/projected/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-kube-api-access-d9798\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.620411 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.622471 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.622537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.622550 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.622570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.622581 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.625776 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.645695 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.661746 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.684478 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.701549 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.721058 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.721047 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.721128 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9798\" (UniqueName: \"kubernetes.io/projected/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-kube-api-access-d9798\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.721158 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.721202 
4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.722082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.722167 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.728372 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.730276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.730345 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.730363 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.730387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.730403 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.739883 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:58Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.741788 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9798\" (UniqueName: \"kubernetes.io/projected/bf1104b4-b2b9-44ab-b85f-4f51228d1ad0-kube-api-access-d9798\") pod \"ovnkube-control-plane-749d76644c-4p58f\" (UID: \"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.747589 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" Jan 23 07:55:58 crc kubenswrapper[4982]: W0123 07:55:58.761665 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1104b4_b2b9_44ab_b85f_4f51228d1ad0.slice/crio-c0c599ef45c61518a0b894fa4c824c956e97295b8ca524840c11e4199ed84c38 WatchSource:0}: Error finding container c0c599ef45c61518a0b894fa4c824c956e97295b8ca524840c11e4199ed84c38: Status 404 returned error can't find the container with id c0c599ef45c61518a0b894fa4c824c956e97295b8ca524840c11e4199ed84c38 Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.830387 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:12:11.146210553 +0000 UTC Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.833511 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.833558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.833569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.833593 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.833604 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.935891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.935919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.935927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.935939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:58 crc kubenswrapper[4982]: I0123 07:55:58.935947 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:58Z","lastTransitionTime":"2026-01-23T07:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.037833 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.037902 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.037921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.037947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.037965 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.075239 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/1.log" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.080761 4982 scope.go:117] "RemoveContainer" containerID="85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.080944 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.083251 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" event={"ID":"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0","Type":"ContainerStarted","Data":"1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.083294 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" event={"ID":"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0","Type":"ContainerStarted","Data":"c0c599ef45c61518a0b894fa4c824c956e97295b8ca524840c11e4199ed84c38"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.093413 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.109648 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.122863 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.134718 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.140251 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.140288 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.140302 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.140319 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.140332 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.151591 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4079209
6b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.167177 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.184979 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.195265 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.209764 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.223318 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.234109 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.243054 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.243091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.243103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.243120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.243134 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.257713 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.272917 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.285190 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.294991 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.309816 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.345051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.345303 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.345481 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.345694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.345822 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.448531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.448564 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.448571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.448583 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.448591 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.551412 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.551917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.551931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.551947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.551959 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.629741 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.629882 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.629936 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630042 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:56:15.629994906 +0000 UTC m=+50.110358342 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630082 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630107 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630126 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630150 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630196 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-23 07:56:15.630173091 +0000 UTC m=+50.110536517 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630265 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:15.630237642 +0000 UTC m=+50.110601118 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.630164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630343 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630376 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630402 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630492 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:15.630470189 +0000 UTC m=+50.110833705 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.630504 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630753 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.630897 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:15.63086234 +0000 UTC m=+50.111225796 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.655173 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.655420 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.655568 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.655747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.656030 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.759619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.759728 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.759754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.759781 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.759804 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.826481 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.826534 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.826540 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.826724 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.826914 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.827105 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.831188 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 07:55:14.174391723 +0000 UTC Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.862744 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.862794 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.862802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.862818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.862827 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.965254 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.965287 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.965297 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.965313 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.965326 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:55:59Z","lastTransitionTime":"2026-01-23T07:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.981698 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xxghm"] Jan 23 07:55:59 crc kubenswrapper[4982]: I0123 07:55:59.985579 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:55:59 crc kubenswrapper[4982]: E0123 07:55:59.985695 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.000404 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:55:59Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.020445 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f762
3eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.033202 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.033310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ng7\" (UniqueName: \"kubernetes.io/projected/0efe4bf7-1550-4885-a344-d045b3a1c653-kube-api-access-g9ng7\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.048058 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.059760 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.067211 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.067257 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.067274 4982 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.067297 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.067315 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.077673 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.088068 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" event={"ID":"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0","Type":"ContainerStarted","Data":"da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.095230 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.114142 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.127104 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.134267 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.134374 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ng7\" (UniqueName: \"kubernetes.io/projected/0efe4bf7-1550-4885-a344-d045b3a1c653-kube-api-access-g9ng7\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:00 crc kubenswrapper[4982]: E0123 07:56:00.134416 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:00 crc kubenswrapper[4982]: E0123 07:56:00.134510 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:00.634485643 +0000 UTC m=+35.114849069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.145919 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.156833 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ng7\" (UniqueName: \"kubernetes.io/projected/0efe4bf7-1550-4885-a344-d045b3a1c653-kube-api-access-g9ng7\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.159364 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.169289 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.169359 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.169404 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.169456 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.169476 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.181116 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.192984 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.209510 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.228232 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.244512 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.262437 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.271383 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.271416 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.271428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.271444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.271455 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.276964 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.288951 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.302168 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.314242 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.326031 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.337998 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.357764 4982 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.372819 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.374516 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.374690 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.374973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.375221 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.375449 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.388068 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.399763 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.415269 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.428104 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.441866 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.472159 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.478464 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.478674 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.478846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.478989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.479115 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.489340 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.504657 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.518962 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.541560 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e
3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:00Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.582806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.582856 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.582872 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.582898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.582917 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.637857 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:00 crc kubenswrapper[4982]: E0123 07:56:00.638061 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:00 crc kubenswrapper[4982]: E0123 07:56:00.638130 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. 
No retries permitted until 2026-01-23 07:56:01.638108306 +0000 UTC m=+36.118471722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.687050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.687275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.687459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.687608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.687822 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.790756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.790819 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.790837 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.790860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.790880 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.831965 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:28:55.316939111 +0000 UTC Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.893598 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.893683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.893699 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.893726 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.893742 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.996723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.996783 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.996792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.996824 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:00 crc kubenswrapper[4982]: I0123 07:56:00.996836 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:00Z","lastTransitionTime":"2026-01-23T07:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.099375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.099621 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.099840 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.100071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.100279 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.202793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.202857 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.202880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.202904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.202921 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.306100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.306166 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.306184 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.306208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.306226 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.409052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.409122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.409144 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.409172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.409193 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.511776 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.511840 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.511857 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.511882 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.511899 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.614570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.614660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.614679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.614702 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.614719 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.655296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:01 crc kubenswrapper[4982]: E0123 07:56:01.655522 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:01 crc kubenswrapper[4982]: E0123 07:56:01.655619 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:03.655594635 +0000 UTC m=+38.135958051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.717469 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.717531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.717548 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.717570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.717609 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.820889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.820970 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.820993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.821020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.821038 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.827299 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.827398 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:01 crc kubenswrapper[4982]: E0123 07:56:01.827461 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.827489 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:01 crc kubenswrapper[4982]: E0123 07:56:01.827736 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.827774 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:01 crc kubenswrapper[4982]: E0123 07:56:01.827912 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:01 crc kubenswrapper[4982]: E0123 07:56:01.828022 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.832542 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:54:32.95937016 +0000 UTC Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.924294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.924366 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.924389 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.924418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:01 crc kubenswrapper[4982]: I0123 07:56:01.924441 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:01Z","lastTransitionTime":"2026-01-23T07:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.027536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.027599 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.027617 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.027676 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.027696 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.130678 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.130751 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.130768 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.130801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.130818 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.234471 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.234535 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.234553 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.234581 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.234598 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.336621 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.336719 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.336737 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.336762 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.336779 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.440412 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.440474 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.440491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.440514 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.440531 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.543195 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.543263 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.543281 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.543310 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.543327 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.646528 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.646579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.646595 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.646616 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.646664 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.750053 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.750118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.750146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.750177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.750200 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.832678 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:54:16.848142384 +0000 UTC Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.853125 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.853179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.853199 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.853230 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.853249 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.956398 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.956496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.956514 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.956538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:02 crc kubenswrapper[4982]: I0123 07:56:02.956555 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:02Z","lastTransitionTime":"2026-01-23T07:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.059692 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.059811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.059830 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.059856 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.059876 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.162407 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.162466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.162484 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.162508 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.162525 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.266026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.266093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.266111 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.266140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.266157 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.369505 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.369583 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.369601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.370057 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.370118 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.473614 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.473732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.473754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.474066 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.474089 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.576683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.576736 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.576753 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.576775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.576791 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.675350 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm"
Jan 23 07:56:03 crc kubenswrapper[4982]: E0123 07:56:03.675609 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 07:56:03 crc kubenswrapper[4982]: E0123 07:56:03.675767 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:07.675736992 +0000 UTC m=+42.156100418 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered
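
Note the retry schedule across the two mount failures: the 07:56:01.655619 attempt scheduled its retry 2 s out (durationBeforeRetry 2s, until m=+38.13), and this one doubles that to 4 s (until m=+42.15). Failed volume operations in nestedpendingoperations back off exponentially per volume until the error clears. The sketch below reproduces the doubling with assumed tuning (500 ms initial delay, ~2 minute ceiling) — consistent with the 2s → 4s progression seen here, but not necessarily the kubelet's exact constants:

// backoff.go - illustrative sketch of the per-volume exponential backoff
// visible above (durationBeforeRetry 2s, then 4s). The tuning constants
// are assumptions for illustration, not kubelet's exact configuration.
package main

import (
	"fmt"
	"time"
)

type expBackoff struct {
	initial time.Duration // first delay after an error
	max     time.Duration // ceiling; delays stop growing here
	current time.Duration // zero until the first failure
}

// next returns the delay to wait before the next retry, doubling the
// previous delay until it saturates at max.
func (b *expBackoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	// Assumed tuning: 500ms initial delay, ~2 minute ceiling.
	b := expBackoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 8; i++ {
		fmt.Printf("retry %d: no retries permitted for %v\n", i+1, b.next())
	}
}

The backoff itself is a symptom here, not the cause: the secret "openshift-multus"/"metrics-daemon-secret" is reported as "not registered" because the kubelet has not yet synced that namespace's objects, so each retry fails identically until it does.
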
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.680673 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.680800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.680883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.680973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.681012 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.784736 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.784815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.784844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.784876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.784901 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.827060 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.827124 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm"
Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.827195 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:03 crc kubenswrapper[4982]: E0123 07:56:03.827249 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.827278 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:03 crc kubenswrapper[4982]: E0123 07:56:03.827425 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:03 crc kubenswrapper[4982]: E0123 07:56:03.827535 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:03 crc kubenswrapper[4982]: E0123 07:56:03.827690 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.832901 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 06:37:15.378559583 +0000 UTC Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.888730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.888796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.888818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.888846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.888868 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.991735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.991804 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.991876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.991909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:03 crc kubenswrapper[4982]: I0123 07:56:03.991934 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:03Z","lastTransitionTime":"2026-01-23T07:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.095052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.095113 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.095135 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.095167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.095188 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.197545 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.197643 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.197652 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.197669 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.197680 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.301030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.301122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.301140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.301163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.301179 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.404113 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.404142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.404150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.404163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.404173 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.507389 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.507494 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.507516 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.507578 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.507598 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.610379 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.610421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.610437 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.610460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.610477 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.713276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.713343 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.713367 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.713399 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.713424 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.815731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.815832 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.815850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.815875 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.815892 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.833228 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:35:29.196872804 +0000 UTC
Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.919138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.919176 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.919189 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.919203 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:04 crc kubenswrapper[4982]: I0123 07:56:04.919212 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:04Z","lastTransitionTime":"2026-01-23T07:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.022100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.022169 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.022191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.022224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.022246 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
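
The certificate_manager.go:356 lines are easy to misread: the expiration (2026-02-24 05:53:03 UTC) never changes, but the logged rotation deadline differs on every pass (2025-11-16, 2026-01-11, 2025-12-11, 2026-01-11 again in this excerpt). That is expected behavior, not corruption: the kubelet's certificate manager re-derives the deadline with jitter each time, commonly described as a random point in roughly the 70-90% span of the certificate's lifetime. A minimal sketch of that computation follows; treat the exact bounds and the assumed issue time as illustrative assumptions, not client-go's verbatim logic:

// rotation.go - sketch of a jittered certificate rotation deadline, the
// mechanism behind the changing "rotation deadline is ..." lines above.
// The 70%-90% window is the commonly cited heuristic; the exact bounds
// here are an assumption.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// certificate's lifetime. Recomputing it yields a different deadline each
// time, which is why consecutive log lines disagree.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(frac * float64(lifetime)))
}

func main() {
	// Expiration taken from the log; the issue time is an assumed value.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}

All the deadlines printed by the excerpt fall months before the fixed expiration, consistent with a jittered fraction of the lifetime rather than a fixed offset.
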
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.125331 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.125394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.125421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.125450 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.125471 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.228293 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.228358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.228375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.228402 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.228420 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.330167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.330213 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.330224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.330241 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.330254 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.436676 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.436727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.436746 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.436765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.436777 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.539891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.539962 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.539981 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.540006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.540023 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.642592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.642714 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.642727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.642746 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.642757 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.745999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.746090 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.746101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.746119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.746130 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.826658 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.826874 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.826955 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:05 crc kubenswrapper[4982]: E0123 07:56:05.827108 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.827173 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:05 crc kubenswrapper[4982]: E0123 07:56:05.827423 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:05 crc kubenswrapper[4982]: E0123 07:56:05.827599 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:05 crc kubenswrapper[4982]: E0123 07:56:05.827829 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.835358 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 13:59:03.373988964 +0000 UTC Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.846468 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.851484 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.851563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.851590 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.851654 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.852721 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.871108 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\
\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.886038 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.905282 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.922073 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.938714 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067
818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.954664 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.956129 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.956177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.956191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.956209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.956240 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:05Z","lastTransitionTime":"2026-01-23T07:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.973250 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:05 crc kubenswrapper[4982]: I0123 07:56:05.997024 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e
3009fef8e3a8688bd86c99b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.023905 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624
370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.041410 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.054118 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.058539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.058582 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.058593 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.058608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.058676 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.074096 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3
ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.088021 4982 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.103122 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.116118 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.132130 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.160751 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.160833 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.160852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.161298 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.161356 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.264189 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.264260 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.264282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.264310 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.264334 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.366826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.366892 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.366914 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.366942 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.366961 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.470034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.470092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.470108 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.470130 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.470147 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.573566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.573656 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.573675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.573697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.573715 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.676822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.676880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.676899 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.676924 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.676941 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.780167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.780242 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.780266 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.780294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.780314 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.835981 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:02:15.382005666 +0000 UTC Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.883305 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.883374 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.883392 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.883417 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.883435 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.986941 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.987005 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.987026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.987052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:06 crc kubenswrapper[4982]: I0123 07:56:06.987072 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:06Z","lastTransitionTime":"2026-01-23T07:56:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.089793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.089863 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.089887 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.089913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.089932 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.192808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.192873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.192895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.192923 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.192943 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.296498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.296554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.296574 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.296604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.296654 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.399468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.399506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.399518 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.399536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.399547 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.502467 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.502516 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.502531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.502551 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.502566 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.605901 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.605968 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.605989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.606017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.606038 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.709270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.709313 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.709324 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.709340 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.709351 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.719106 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.719270 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.719349 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:15.71932523 +0000 UTC m=+50.199688656 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.812212 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.812282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.812300 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.812325 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.812343 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.826780 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.826821 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.826840 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.826970 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.827240 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.827456 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.827515 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.827664 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.836389 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:39:17.589987024 +0000 UTC Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.916098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.916195 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.916217 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.916252 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.916275 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.943556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.943668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.943699 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.943732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.943756 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:07 crc kubenswrapper[4982]: E0123 07:56:07.968202 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:07Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.974076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.974132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.974152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.974185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:07 crc kubenswrapper[4982]: I0123 07:56:07.974206 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:07Z","lastTransitionTime":"2026-01-23T07:56:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: E0123 07:56:07.996687 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:07Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.008042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.008108 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.008126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.008153 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.008173 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: E0123 07:56:08.029611 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:08Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.035409 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.035465 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.035482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.035506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.035523 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: E0123 07:56:08.055664 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:08Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.060691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.060757 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.060776 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.060800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.060818 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: E0123 07:56:08.080337 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:08Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:08 crc kubenswrapper[4982]: E0123 07:56:08.080549 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.081999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.082050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.082069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.082093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.082110 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.184815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.184856 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.185029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.185057 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.185067 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.288537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.288683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.288700 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.288724 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.288763 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.392362 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.392442 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.392460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.392488 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.392506 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.495304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.495360 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.495374 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.495421 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.495435 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.598041 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.598123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.598145 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.598220 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.598245 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.700774 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.700814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.700830 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.700845 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.700856 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.803984 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.804042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.804060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.804086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.804104 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.837308 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:08:10.355672039 +0000 UTC Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.906363 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.906411 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.906423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.906440 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:08 crc kubenswrapper[4982]: I0123 07:56:08.906452 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:08Z","lastTransitionTime":"2026-01-23T07:56:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.009376 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.009438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.009460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.009482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.009495 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.112849 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.112921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.112946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.112975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.112995 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.215979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.216033 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.216045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.216064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.216085 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.323360 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.323394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.323403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.323418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.323427 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.426304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.426354 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.426369 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.426401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.426417 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.529629 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.529706 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.529722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.529747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.529764 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.632517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.632569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.632588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.632610 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.632632 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.735240 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.735272 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.735282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.735296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.735307 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.826832 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.826956 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:09 crc kubenswrapper[4982]: E0123 07:56:09.827253 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.827092 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.826983 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:09 crc kubenswrapper[4982]: E0123 07:56:09.827423 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:09 crc kubenswrapper[4982]: E0123 07:56:09.827651 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:09 crc kubenswrapper[4982]: E0123 07:56:09.827733 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.837793 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 02:07:42.604762704 +0000 UTC Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.840118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.840201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.840224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.840261 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.840282 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.942707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.942770 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.942788 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.942818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:09 crc kubenswrapper[4982]: I0123 07:56:09.942836 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:09Z","lastTransitionTime":"2026-01-23T07:56:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.045432 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.045491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.045510 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.045533 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.045550 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.148315 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.148382 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.148400 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.148426 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.148443 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.251536 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.251660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.251687 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.251717 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.251739 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.354608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.354701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.354722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.354745 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.354762 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.457740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.457822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.457848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.457873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.457889 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.560702 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.560764 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.560781 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.560806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.560824 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.663850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.663936 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.663959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.663990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.664014 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.766657 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.766708 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.766722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.766739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.766750 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.838531 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:09:21.935765884 +0000 UTC Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.869179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.869243 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.869261 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.869285 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.869302 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.972786 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.972848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.972870 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.972900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:10 crc kubenswrapper[4982]: I0123 07:56:10.972924 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:10Z","lastTransitionTime":"2026-01-23T07:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.076092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.076155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.076171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.076226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.076244 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.179747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.179866 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.179885 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.179953 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.179974 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.283798 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.283883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.283903 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.283934 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.283950 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.387276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.387358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.387396 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.387427 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.387447 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.489612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.489669 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.489679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.489695 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.489706 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.592465 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.592497 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.592504 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.592517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.592528 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.695323 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.695387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.695410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.695442 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.695465 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.797916 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.797986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.798008 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.798038 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.798065 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.826549 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.826609 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:11 crc kubenswrapper[4982]: E0123 07:56:11.826799 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.826856 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.826870 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:11 crc kubenswrapper[4982]: E0123 07:56:11.826986 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:11 crc kubenswrapper[4982]: E0123 07:56:11.827098 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:11 crc kubenswrapper[4982]: E0123 07:56:11.827227 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.838845 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:38:15.368531475 +0000 UTC Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.900607 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.900694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.900712 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.900734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:11 crc kubenswrapper[4982]: I0123 07:56:11.900752 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:11Z","lastTransitionTime":"2026-01-23T07:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.003401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.003475 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.003497 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.003526 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.003547 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.106787 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.106842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.106865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.106894 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.106916 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.210035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.210093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.210110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.210134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.210151 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.313963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.314094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.314115 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.314148 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.314169 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.417979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.418048 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.418072 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.418904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.418962 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.521918 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.521996 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.522013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.522039 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.522058 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.625299 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.625723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.625898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.626101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.626229 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.729858 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.729960 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.729979 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.730003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.730022 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.833086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.833474 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.833619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.833864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.834102 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.839385 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:38:36.847598049 +0000 UTC Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.937255 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.937321 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.937346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.937369 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:12 crc kubenswrapper[4982]: I0123 07:56:12.937386 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:12Z","lastTransitionTime":"2026-01-23T07:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.040669 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.040734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.040751 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.040776 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.040793 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.142933 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.142976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.142986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.143002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.143015 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.246291 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.246359 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.246380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.246407 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.246425 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.350050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.350089 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.350101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.350137 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.350146 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.453120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.453158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.453168 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.453182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.453191 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.555948 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.556038 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.556052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.556071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.556087 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.658249 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.658277 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.658286 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.658300 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.658308 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.761493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.761538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.761550 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.761565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.761576 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.826755 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.826757 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.826783 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.826781 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:13 crc kubenswrapper[4982]: E0123 07:56:13.827405 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:13 crc kubenswrapper[4982]: E0123 07:56:13.827592 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:13 crc kubenswrapper[4982]: E0123 07:56:13.827733 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:13 crc kubenswrapper[4982]: E0123 07:56:13.827867 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.827957 4982 scope.go:117] "RemoveContainer" containerID="85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.839571 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:35:48.243202584 +0000 UTC Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.865324 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.865403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.865538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.865565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.865582 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.967867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.967915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.967927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.967945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:13 crc kubenswrapper[4982]: I0123 07:56:13.967957 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:13Z","lastTransitionTime":"2026-01-23T07:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.071800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.071842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.071854 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.071873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.071888 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.141764 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/1.log" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.145220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.145339 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.157981 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.174901 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.174947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.174957 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.174974 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.174986 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.179234 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.194180 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.210047 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.224882 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.243352 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.260658 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.276065 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.277553 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.277589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.277601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.277622 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.277648 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.289214 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.301391 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.314969 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.324735 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.333109 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.348553 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.363414 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.376872 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.379211 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.379246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.379255 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.379272 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.379282 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.388910 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:14Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.481853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.481884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.481892 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.481906 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.481915 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.585175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.585239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.585256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.585282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.585296 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.688474 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.688554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.688576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.688609 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.688670 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.792025 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.792117 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.792136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.792169 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.792194 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.840491 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:21:29.116369588 +0000 UTC Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.894669 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.894727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.894745 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.894769 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.894782 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.998018 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.998083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.998108 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.998136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:14 crc kubenswrapper[4982]: I0123 07:56:14.998154 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:14Z","lastTransitionTime":"2026-01-23T07:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.107948 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.107997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.108007 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.108021 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.108030 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.149798 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/2.log" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.150327 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/1.log" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.153749 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f" exitCode=1 Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.153823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.153889 4982 scope.go:117] "RemoveContainer" containerID="85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.160894 4982 scope.go:117] "RemoveContainer" containerID="a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.161224 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.169882 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.184258 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.195412 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.206918 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.214306 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.214347 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.214358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.214373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.214383 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.218969 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.228710 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.275569 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d
6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://
f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.294149 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.300980 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.314966 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.316338 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.316375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.316385 4982 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.316401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.316412 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.326202 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.339308 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.351302 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.360830 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.371881 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.383293 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.393205 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.403341 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.419216 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.419406 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.419543 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.419689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.419880 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.522386 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.522422 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.522431 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.522444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.522453 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.624711 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.624774 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.624789 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.624802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.624811 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.696115 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.696260 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:56:47.696241877 +0000 UTC m=+82.176605263 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.696772 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.696923 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.697066 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.697200 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.696930 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.697566 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.697716 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.697905 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:47.697860331 +0000 UTC m=+82.178223747 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.697098 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.698179 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.697160 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.697284 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.698335 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.698471 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:47.698448197 +0000 UTC m=+82.178811593 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.698690 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:47.698672664 +0000 UTC m=+82.179036060 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.698792 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-23 07:56:47.698780247 +0000 UTC m=+82.179143643 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.727166 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.727238 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.727263 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.727291 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.727314 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.798648 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.798776 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.798832 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:56:31.798816392 +0000 UTC m=+66.279179788 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.826696 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.826781 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.826834 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.826915 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.827024 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.827112 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.827715 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:15 crc kubenswrapper[4982]: E0123 07:56:15.827782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.830666 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.830694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.830705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.830722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.830737 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.841569 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:16:22.795442446 +0000 UTC Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.843746 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.870106 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:
55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.887581 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.901143 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.922426 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.933404 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.933687 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.933746 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.933771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.933787 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:15Z","lastTransitionTime":"2026-01-23T07:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.941309 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.961753 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:15 crc kubenswrapper[4982]: I0123 07:56:15.983222 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.000800 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:15Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.026770 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.037149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.037199 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.037216 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.037239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.037256 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.044981 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.064727 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.090781 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.115855 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.138134 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d
6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f1b88961dea700635dc1469b75446a8d1fcb6e3009fef8e3a8688bd86c99b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:55:57Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 07:55:57.008490 6381 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 07:55:57.010328 6381 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 07:55:57.010408 6381 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 07:55:57.010462 6381 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 07:55:57.010503 6381 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 07:55:57.010599 6381 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 07:55:57.010776 6381 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 07:55:57.010657 6381 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 07:55:57.010821 6381 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 07:55:57.010833 6381 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 07:55:57.010855 6381 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 07:55:57.010856 6381 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 07:55:57.010877 6381 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 07:55:57.010882 6381 factory.go:656] Stopping watch factory\\\\nI0123 07:55:57.010904 6381 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://
f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.139108 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.139238 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.140030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.140514 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.140699 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.158159 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/2.log" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.162836 4982 scope.go:117] "RemoveContainer" containerID="a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f" Jan 23 07:56:16 crc kubenswrapper[4982]: E0123 07:56:16.163067 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.177938 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.191139 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.207031 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.224431 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d
6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.242901 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.242966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.242978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.242995 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.243032 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.244411 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.255907 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.268726 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.281675 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc 
kubenswrapper[4982]: I0123 07:56:16.296856 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.309803 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.323292 4982 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.340914 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.345865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.345910 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.345922 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.345940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.345950 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.362974 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.377673 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.388324 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.405708 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.420023 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.432672 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.445013 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:16Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.448991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.449153 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.449262 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.449383 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.449495 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.552425 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.552466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.552477 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.552496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.552509 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.655120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.655183 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.655204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.655230 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.655247 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.758191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.758495 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.758721 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.758890 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.759029 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.842547 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:00:41.196313773 +0000 UTC Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.862110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.862169 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.862187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.862210 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.862227 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.964939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.965017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.965073 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.965103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:16 crc kubenswrapper[4982]: I0123 07:56:16.965123 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:16Z","lastTransitionTime":"2026-01-23T07:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.067473 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.067525 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.067542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.067565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.067582 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.170553 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.170615 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.170679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.170709 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.170732 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.273136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.273197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.273219 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.273247 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.273271 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.376661 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.376743 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.376762 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.376787 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.376805 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.479541 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.479600 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.479617 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.479673 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.479691 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.583061 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.583131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.583154 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.583186 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.583203 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.686422 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.686480 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.686498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.686524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.686545 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.789270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.789333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.789349 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.789374 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.789391 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.826268 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.826331 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:17 crc kubenswrapper[4982]: E0123 07:56:17.826449 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.826475 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.826471 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:17 crc kubenswrapper[4982]: E0123 07:56:17.826717 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:17 crc kubenswrapper[4982]: E0123 07:56:17.826920 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:17 crc kubenswrapper[4982]: E0123 07:56:17.827042 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.843913 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:19:51.040586424 +0000 UTC Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.892323 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.892396 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.892419 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.892448 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.892476 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.995771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.995834 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.995853 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.995879 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:17 crc kubenswrapper[4982]: I0123 07:56:17.995900 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:17Z","lastTransitionTime":"2026-01-23T07:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.099725 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.099781 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.099800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.099825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.099842 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.197296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.197380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.197405 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.197440 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.197463 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: E0123 07:56:18.217174 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:18Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.222876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.222936 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.222954 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.222978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.222997 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: E0123 07:56:18.239354 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:18Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.243698 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.243786 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.243805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.243832 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.243848 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: E0123 07:56:18.261361 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:18Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.265157 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.265250 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.265275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.265306 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.265328 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: E0123 07:56:18.282268 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:18Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.285842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.285936 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.285960 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.285981 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.285997 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: E0123 07:56:18.300225 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:18Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:18 crc kubenswrapper[4982]: E0123 07:56:18.300442 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.302457 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.302540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.302559 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.302580 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.302595 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.404589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.404705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.404757 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.404782 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.404798 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.507928 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.508052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.508071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.508098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.508114 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.610724 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.610792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.610815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.610843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.610861 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.713084 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.713141 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.713157 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.713181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.713197 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.816071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.816140 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.816163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.816194 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.816214 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.845105 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:13:17.567934455 +0000 UTC Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.919179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.919272 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.919292 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.919316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:18 crc kubenswrapper[4982]: I0123 07:56:18.919333 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:18Z","lastTransitionTime":"2026-01-23T07:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.021828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.021872 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.021882 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.021898 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.021909 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.124104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.124171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.124189 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.124213 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.124230 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.226468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.226519 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.226531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.226552 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.226564 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.329668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.329750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.329773 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.329808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.329833 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.432993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.433045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.433062 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.433084 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.433101 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.536433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.536503 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.536522 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.536548 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.536566 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.639685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.639765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.639795 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.639829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.639851 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.743273 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.743336 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.743355 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.743380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.743398 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.827189 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.827259 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.827323 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.827408 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:19 crc kubenswrapper[4982]: E0123 07:56:19.827405 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:19 crc kubenswrapper[4982]: E0123 07:56:19.827517 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:19 crc kubenswrapper[4982]: E0123 07:56:19.827696 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:19 crc kubenswrapper[4982]: E0123 07:56:19.827853 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.845244 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 03:04:35.472094584 +0000 UTC Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.845358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.845407 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.845426 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.845450 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.845468 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.947708 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.947734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.947742 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.947755 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:19 crc kubenswrapper[4982]: I0123 07:56:19.947763 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:19Z","lastTransitionTime":"2026-01-23T07:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.050773 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.050989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.050999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.051011 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.051020 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.153302 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.153352 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.153363 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.153380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.153683 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.256703 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.256730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.256738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.256750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.256759 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.359595 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.359643 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.359651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.359664 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.359672 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.462345 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.462377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.462386 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.462398 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.462407 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.566284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.566333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.566343 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.566361 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.566372 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.669410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.669468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.669484 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.669507 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.669524 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.773578 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.773694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.773717 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.773741 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.773759 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.845788 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:14:46.010743294 +0000 UTC Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.876372 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.876423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.876439 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.876464 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.876481 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.956201 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.970359 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.980408 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.980456 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.980466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.980480 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.980489 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:20Z","lastTransitionTime":"2026-01-23T07:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:20 crc kubenswrapper[4982]: I0123 07:56:20.983810 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:20Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.002354 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.020464 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.041504 4982 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.058050 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.072056 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.083534 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.083566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.083575 4982 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.083591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.083600 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.086042 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.101983 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.122813 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.139007 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.154611 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.171392 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.185846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.185895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.185907 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.185925 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.185941 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.189326 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.211504 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d
6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.234876 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624
370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.249402 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.268392 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:21Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.288534 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.288569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.288580 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.288597 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.288612 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.390902 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.390961 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.390978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.391003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.391020 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.493783 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.493884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.493904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.493971 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.493990 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.596810 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.597037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.597101 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.597176 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.597302 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.700270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.700570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.700664 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.700729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.700784 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.804061 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.804316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.804387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.804469 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.804536 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.826549 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.826694 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.826592 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:21 crc kubenswrapper[4982]: E0123 07:56:21.831823 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.831969 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:21 crc kubenswrapper[4982]: E0123 07:56:21.831936 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:21 crc kubenswrapper[4982]: E0123 07:56:21.832320 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:21 crc kubenswrapper[4982]: E0123 07:56:21.832431 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.846245 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:31:02.531377593 +0000 UTC Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.907934 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.907989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.908007 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.908030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:21 crc kubenswrapper[4982]: I0123 07:56:21.908048 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:21Z","lastTransitionTime":"2026-01-23T07:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.010906 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.010974 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.010999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.011028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.011161 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.113788 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.113891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.113915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.113947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.113970 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.219968 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.220013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.220028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.220046 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.220060 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.322408 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.322458 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.322468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.322487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.322498 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.425291 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.425331 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.425342 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.425357 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.425368 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.528704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.528768 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.528784 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.528809 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.528828 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.631782 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.631878 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.631959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.631984 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.632001 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.734734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.734780 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.734793 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.734810 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.734823 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.837618 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.837779 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.837807 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.837836 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.837855 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.847145 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:37:13.100446644 +0000 UTC Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.940133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.940167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.940175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.940187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:22 crc kubenswrapper[4982]: I0123 07:56:22.940195 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:22Z","lastTransitionTime":"2026-01-23T07:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.043034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.043100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.043110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.043122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.043131 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.146430 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.146491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.146514 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.146539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.146556 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.249132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.249185 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.249201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.249224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.249241 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.352297 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.352356 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.352372 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.352394 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.352410 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.455890 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.455958 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.455975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.455998 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.456016 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.558604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.558707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.558730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.558760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.558784 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.661805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.661852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.661867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.661886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.661902 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.769598 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.769650 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.769662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.769679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.769690 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.827119 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.827187 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.827221 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.827185 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:23 crc kubenswrapper[4982]: E0123 07:56:23.827336 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:23 crc kubenswrapper[4982]: E0123 07:56:23.827594 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:23 crc kubenswrapper[4982]: E0123 07:56:23.827723 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:23 crc kubenswrapper[4982]: E0123 07:56:23.827903 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.847927 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:29:00.659968128 +0000 UTC Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.871981 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.872037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.872053 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.872076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.872095 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.975213 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.975280 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.975296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.975320 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:23 crc kubenswrapper[4982]: I0123 07:56:23.975338 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:23Z","lastTransitionTime":"2026-01-23T07:56:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.078427 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.078512 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.078537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.078564 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.078581 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.181226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.181293 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.181312 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.181336 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.181356 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.283931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.283982 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.283993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.284060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.284075 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.387959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.388037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.388057 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.388084 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.388103 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.491350 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.491410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.491429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.491452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.491468 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.593829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.593873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.593885 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.593901 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.593912 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.696500 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.696538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.696547 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.696560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.696569 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.798425 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.798467 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.798477 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.798493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.798503 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.848393 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:48:38.642583698 +0000 UTC Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.901390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.901444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.901461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.901484 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:24 crc kubenswrapper[4982]: I0123 07:56:24.901502 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:24Z","lastTransitionTime":"2026-01-23T07:56:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.004601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.004691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.004713 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.004737 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.004754 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.107656 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.107701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.107713 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.107729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.107741 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.209416 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.209470 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.209488 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.209509 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.209526 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.314566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.314672 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.314701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.314731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.314753 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.416727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.416771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.416782 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.416801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.416813 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.520106 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.520170 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.520181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.520197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.520209 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.622839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.622907 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.622918 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.622935 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.622946 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.725568 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.725662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.725680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.725703 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.725723 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.826923 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.826976 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.827049 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:25 crc kubenswrapper[4982]: E0123 07:56:25.827895 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.829834 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:25 crc kubenswrapper[4982]: E0123 07:56:25.830107 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:25 crc kubenswrapper[4982]: E0123 07:56:25.830333 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:25 crc kubenswrapper[4982]: E0123 07:56:25.830542 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.833490 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.833561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.833589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.833675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.833710 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:25Z","lastTransitionTime":"2026-01-23T07:56:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.849606 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:50:36.655400209 +0000 UTC Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.854088 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:25Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.869959 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:25Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:25 crc kubenswrapper[4982]: I0123 07:56:25.888511 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:25Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.038167 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:25Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.044304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.044612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: 
I0123 07:56:26.044755 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.044895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.045016 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.060276 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.076963 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.111767 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.131577 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.147900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.147942 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.147955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.147973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.147987 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.156516 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.178555 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.203406 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d
6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.223777 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.241502 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.250309 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.250359 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.250374 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.250393 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.250406 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.256267 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.277533 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.298664 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.316197 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.332610 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:26Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.353348 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.353409 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.353424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.353446 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.353461 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.456921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.457012 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.457040 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.457072 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.457094 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.560791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.560841 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.560851 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.560871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.560881 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.663560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.663613 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.663651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.663668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.663678 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.766890 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.767479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.767496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.767533 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.767554 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.849812 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:49:02.780370478 +0000 UTC Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.870056 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.870107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.870119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.870139 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.870151 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.973381 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.973428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.973436 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.973455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:26 crc kubenswrapper[4982]: I0123 07:56:26.973465 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:26Z","lastTransitionTime":"2026-01-23T07:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.075900 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.075995 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.076014 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.076042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.076061 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.178316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.178381 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.178396 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.178418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.178431 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.281009 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.281075 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.281093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.281118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.281137 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.384277 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.384343 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.384361 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.384387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.384406 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.487897 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.487986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.488006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.488034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.488053 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.591106 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.591187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.591209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.591236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.591256 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.694928 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.694998 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.695016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.695042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.695060 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.798757 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.798836 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.798855 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.798889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.798907 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.827404 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.827428 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.827505 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.827885 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:27 crc kubenswrapper[4982]: E0123 07:56:27.828069 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.828233 4982 scope.go:117] "RemoveContainer" containerID="a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f" Jan 23 07:56:27 crc kubenswrapper[4982]: E0123 07:56:27.828265 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:27 crc kubenswrapper[4982]: E0123 07:56:27.828427 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:27 crc kubenswrapper[4982]: E0123 07:56:27.828513 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:56:27 crc kubenswrapper[4982]: E0123 07:56:27.828598 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.850224 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:56:08.302054888 +0000 UTC Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.903904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.904017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.904051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.904091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:27 crc kubenswrapper[4982]: I0123 07:56:27.904128 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:27Z","lastTransitionTime":"2026-01-23T07:56:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.009404 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.009510 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.009531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.009558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.009578 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.112991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.113080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.113100 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.113134 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.113156 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.216150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.216245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.216269 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.216307 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.216332 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.320045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.320130 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.320151 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.320189 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.320217 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.423966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.424028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.424043 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.424061 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.424071 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.450282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.450380 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.450414 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.450450 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.450476 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: E0123 07:56:28.476646 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:28Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.483233 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.483356 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.483387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.483449 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.483474 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: E0123 07:56:28.503271 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:28Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.509345 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.509468 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.509494 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.509534 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.509557 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: E0123 07:56:28.534231 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:28Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.540107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.540159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.540177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.540208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.540227 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: E0123 07:56:28.557865 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:28Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.561771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.561801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.561811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.561825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.561834 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: E0123 07:56:28.576413 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:28Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:28 crc kubenswrapper[4982]: E0123 07:56:28.576551 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.578510 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.578534 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.578544 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.578556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.578565 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.681543 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.681675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.681690 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.681707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.681722 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.784904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.784972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.784990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.785016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.785035 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.851390 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:27:16.222193095 +0000 UTC Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.888571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.888688 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.888707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.888735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.888753 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.992873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.992931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.992949 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.992978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:28 crc kubenswrapper[4982]: I0123 07:56:28.992997 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:28Z","lastTransitionTime":"2026-01-23T07:56:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.096995 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.097040 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.097051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.097067 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.097077 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.200125 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.200207 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.200230 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.200260 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.200279 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.303873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.303940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.303951 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.303974 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.303988 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.407671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.407732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.407746 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.407772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.407786 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.511092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.511138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.511154 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.511174 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.511185 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.614413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.614470 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.614483 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.614506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.614519 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.719110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.719188 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.719206 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.719806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.719868 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.823484 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.823541 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.823619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.823684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.823760 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.828976 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.829110 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.829113 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.829018 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:29 crc kubenswrapper[4982]: E0123 07:56:29.829240 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:29 crc kubenswrapper[4982]: E0123 07:56:29.829349 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:29 crc kubenswrapper[4982]: E0123 07:56:29.829480 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:29 crc kubenswrapper[4982]: E0123 07:56:29.829578 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.852267 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 14:51:13.48386401 +0000 UTC Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.927064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.927113 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.927126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.927149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:29 crc kubenswrapper[4982]: I0123 07:56:29.927164 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:29Z","lastTransitionTime":"2026-01-23T07:56:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.030167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.030211 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.030224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.030239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.030250 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.133726 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.133808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.133828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.133860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.133885 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.236679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.236747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.236764 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.236789 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.236809 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.342453 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.342516 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.342531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.342555 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.342571 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.446457 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.446501 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.446512 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.446529 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.446543 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.549644 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.549683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.549691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.549710 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.549719 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.653150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.653202 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.653210 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.653227 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.653237 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.756389 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.756469 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.756495 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.756535 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.756561 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.853270 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:00:44.80573285 +0000 UTC Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.859969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.860054 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.860077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.860104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.860123 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.963242 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.963911 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.964035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.964150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:30 crc kubenswrapper[4982]: I0123 07:56:30.964258 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:30Z","lastTransitionTime":"2026-01-23T07:56:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.066737 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.067263 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.067360 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.067481 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.067584 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.175147 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.175241 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.175262 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.175290 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.175317 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.278596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.278686 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.278705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.278731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.278754 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.381484 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.381561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.381581 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.381606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.381661 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.484482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.484542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.484561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.484587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.484605 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.587621 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.587716 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.587734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.587759 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.587780 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.690951 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.691005 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.691016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.691035 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.691046 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.794107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.794158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.794172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.794192 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.794203 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.802703 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:31 crc kubenswrapper[4982]: E0123 07:56:31.802885 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:31 crc kubenswrapper[4982]: E0123 07:56:31.803012 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:57:03.802982953 +0000 UTC m=+98.283346379 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.826668 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.826693 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.826784 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:31 crc kubenswrapper[4982]: E0123 07:56:31.826890 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:31 crc kubenswrapper[4982]: E0123 07:56:31.826819 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:31 crc kubenswrapper[4982]: E0123 07:56:31.827079 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.827166 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:31 crc kubenswrapper[4982]: E0123 07:56:31.827313 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.854303 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:11:54.814885302 +0000 UTC Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.904785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.905028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.905092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.905156 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:31 crc kubenswrapper[4982]: I0123 07:56:31.905214 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:31Z","lastTransitionTime":"2026-01-23T07:56:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.007150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.007187 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.007197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.007212 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.007222 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.109744 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.109959 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.110074 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.110154 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.110214 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.212977 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.213005 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.213013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.213027 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.213037 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.233660 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/0.log" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.233699 4982 generic.go:334] "Generic (PLEG): container finished" podID="a3e50013-d63d-4e43-924e-74237cd4a690" containerID="574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4" exitCode=1 Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.233721 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerDied","Data":"574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.234025 4982 scope.go:117] "RemoveContainer" containerID="574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.251922 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.278493 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd
-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.298392 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.313268 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.314990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.315028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.315037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.315050 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.315060 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.331233 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.354234 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d
6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.375770 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.400940 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.413215 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.417032 4982 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.417071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.417083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.417097 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.417106 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.429218 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4079209
6b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.443123 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.455446 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.467984 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.480055 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.489811 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.500854 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.510972 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.518606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.518648 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.518657 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.518671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.518680 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.524543 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:32Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.620953 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.620978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.620988 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.621003 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.621015 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.723829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.723884 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.723903 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.723926 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.723945 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.826072 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.826325 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.826334 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.826348 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.826357 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.855440 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:58:17.057626369 +0000 UTC Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.929184 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.929242 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.929261 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.929285 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:32 crc kubenswrapper[4982]: I0123 07:56:32.929303 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:32Z","lastTransitionTime":"2026-01-23T07:56:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.031775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.031826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.031843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.031866 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.031883 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.134694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.134755 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.134770 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.134795 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.134813 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.236939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.237001 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.237028 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.237133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.237155 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.240452 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/0.log" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.240518 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerStarted","Data":"1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.260706 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.274070 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.292084 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.306974 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.325823 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.340162 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.340227 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.340246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.340273 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.340294 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.341538 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.369743 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.382406 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.397706 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.412160 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.442945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.443006 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.443026 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.443064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.443083 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.443306 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.460297 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.476250 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.488159 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.508946 4982 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.523332 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.538881 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.546270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.546314 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.546323 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.546342 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.546353 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.557783 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:33Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.649471 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.649550 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.649569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.649596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.649618 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.752312 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.752391 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.752410 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.752441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.752464 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.826902 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.827036 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.827068 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:33 crc kubenswrapper[4982]: E0123 07:56:33.827135 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.827260 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:33 crc kubenswrapper[4982]: E0123 07:56:33.827265 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:33 crc kubenswrapper[4982]: E0123 07:56:33.827438 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:33 crc kubenswrapper[4982]: E0123 07:56:33.827584 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.855332 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.855383 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.855397 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.855414 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.855427 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.855593 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:20:31.200592006 +0000 UTC Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.958694 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.958738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.958750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.958765 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:33 crc kubenswrapper[4982]: I0123 07:56:33.958776 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:33Z","lastTransitionTime":"2026-01-23T07:56:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.060344 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.060433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.060452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.060511 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.060529 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.163660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.163720 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.163738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.163764 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.163783 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.266316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.266370 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.266389 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.266414 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.266432 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.369158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.369239 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.369266 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.369300 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.369324 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.471566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.471618 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.471651 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.471668 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.471682 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.574515 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.574544 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.574554 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.574569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.574579 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.677354 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.677401 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.677413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.677431 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.677445 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.826576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.826676 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.826695 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.826719 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.826735 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.856261 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:33:46.343945587 +0000 UTC Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.930947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.930990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.931002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.931020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:34 crc kubenswrapper[4982]: I0123 07:56:34.931031 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:34Z","lastTransitionTime":"2026-01-23T07:56:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.033969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.034019 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.034039 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.034061 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.034076 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.136455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.136498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.136506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.136521 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.136531 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.238842 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.238881 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.238893 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.238909 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.238919 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.340590 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.340662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.340671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.340686 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.340696 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.442531 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.442558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.442567 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.442579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.442588 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.544784 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.544831 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.544847 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.544870 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.544888 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.646494 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.646573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.646588 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.646602 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.646871 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.750091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.750175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.750255 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.750292 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.750343 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.826709 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:35 crc kubenswrapper[4982]: E0123 07:56:35.827013 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.827686 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:35 crc kubenswrapper[4982]: E0123 07:56:35.828216 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.828499 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:35 crc kubenswrapper[4982]: E0123 07:56:35.828660 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.829693 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:35 crc kubenswrapper[4982]: E0123 07:56:35.829971 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.844615 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117
ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.853096 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.854033 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.854105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.854172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.854231 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.855033 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.856901 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:26:01.474903283 +0000 UTC Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.865710 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.876131 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.887375 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.896082 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.913285 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.922321 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.932110 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.941244 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.957190 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.957221 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.957229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.957243 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.957251 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:35Z","lastTransitionTime":"2026-01-23T07:56:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.960220 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.973935 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:35 crc kubenswrapper[4982]: I0123 07:56:35.987002 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.000924 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:35Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.010812 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:36Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.024124 4982 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed616
3a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\
":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:36Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.034589 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:36Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.044861 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:36Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.059710 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.059750 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.059763 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.059779 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.059791 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.161750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.161784 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.161792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.161804 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.161812 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.265517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.265556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.265566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.265582 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.265594 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.369300 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.369406 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.369429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.369498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.369519 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.472289 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.472327 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.472339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.472354 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.472364 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.574671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.574712 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.574720 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.574735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.574745 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
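The setters.go:603 entries repeat one condition object verbatim. As a reading aid, here is a small Go sketch that decodes that payload into a local struct; the field names mirror the JSON in the log, but the struct is a stand-in for illustration, not the upstream k8s.io/api NodeCondition type:

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields visible in the "Node became not ready" entries.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied from a setters.go:603 line above (message abridged).
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Printf("%s=%s reason=%s heartbeat=%s\n", c.Type, c.Status, c.Reason, c.LastHeartbeatTime)
}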
Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.681488 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.681579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.681601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.681654 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.681686 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.784951 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.785032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.785059 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.785086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.785106 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.837861 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.858361 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:48:13.987540521 +0000 UTC Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.887885 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.887946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.887965 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.887990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.888009 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.991196 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.991236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.991244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.991257 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:36 crc kubenswrapper[4982]: I0123 07:56:36.991266 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:36Z","lastTransitionTime":"2026-01-23T07:56:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.096395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.096434 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.096445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.096461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.096472 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.198388 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.198423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.198435 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.198450 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.198460 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.300944 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.300973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.300983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.300999 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.301010 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.403546 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.403584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.403595 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.403612 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.403660 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.505952 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.506013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.506030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.506109 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.506128 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.608057 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.608095 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.608103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.608118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.608130 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.710075 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.710110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.710120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.710136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.710146 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.812396 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.812437 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.812445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.812459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.812470 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.826659 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.826733 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.826802 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:37 crc kubenswrapper[4982]: E0123 07:56:37.826905 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.826939 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:37 crc kubenswrapper[4982]: E0123 07:56:37.827020 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:37 crc kubenswrapper[4982]: E0123 07:56:37.827256 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:37 crc kubenswrapper[4982]: E0123 07:56:37.827326 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.858818 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:08:09.224506073 +0000 UTC Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.914653 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.914715 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.914729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.914745 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:37 crc kubenswrapper[4982]: I0123 07:56:37.914757 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:37Z","lastTransitionTime":"2026-01-23T07:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.017483 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.017533 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.017549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.017569 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.017588 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.120114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.120152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.120163 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.120178 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.120189 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.222286 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.222311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.222318 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.222329 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.222336 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.324393 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.324429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.324437 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.324452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.324461 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.427152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.427186 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.427195 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.427208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.427219 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.529863 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.529940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.529963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.529989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.530004 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.632771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.632808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.632818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.632849 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.632857 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.735141 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.735204 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.735222 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.735247 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.735264 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.782209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.782246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.782255 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.782270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.782283 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: E0123 07:56:38.799774 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:38Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.804055 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.804098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
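The status patch above is not rejected by the API server proper: it dies in the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z, five months before the log's clock of 2026-01-23. The check the TLS layer performs can be reproduced against any PEM certificate with crypto/x509; this sketch takes the file path as an argument and does not claim to know where the actual webhook certificate lives.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: certwindow <cert.pem>")
		os.Exit(1)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the log: current time 2026-01-23T07:56:38Z is after 2025-08-24T17:21:41Z.
		fmt.Println("certificate has expired")
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}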
event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.804109 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.804125 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.804139 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: E0123 07:56:38.822410 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:38Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.827360 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.827454 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
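"Error updating node status, will retry" then recurs with a fresh timestamp in each patch body: the kubelet retries the status update a bounded number of times per sync, nodeStatusUpdateRetry (5 in the upstream kubelet, asserted here from memory of kubelet_node_status.go), before logging a terminal failure. A stdlib sketch of that loop shape, with a hypothetical doPatch standing in for the API call:

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // constant name/value as in upstream kubelet (assumption)

func updateNodeStatus(doPatch func() error) error {
	var err error
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err = doPatch(); err == nil {
			return nil
		}
		fmt.Printf("Error updating node status, will retry (attempt %d): %v\n", i+1, err)
	}
	return fmt.Errorf("update node status exceeds retry count: %w", err)
}

func main() {
	webhookDown := func() error {
		return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	}
	fmt.Println(updateNodeStatus(webhookDown))
}

Until the webhook's certificate is renewed, every attempt fails the same way, which is why the identical patch body keeps reappearing below with only the heartbeat timestamps advancing.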
event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.827491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.827515 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.827529 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: E0123 07:56:38.846204 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:38Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.850734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.850816 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.850829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.850849 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.850863 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.859062 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:55:54.436627047 +0000 UTC Jan 23 07:56:38 crc kubenswrapper[4982]: E0123 07:56:38.864268 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:38Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.868232 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.868258 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.868268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.868287 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.868298 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: E0123 07:56:38.881498 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:38Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:38 crc kubenswrapper[4982]: E0123 07:56:38.881605 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.883002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.883034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.883044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.883056 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.883064 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.985888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.985927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.985938 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.985954 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:38 crc kubenswrapper[4982]: I0123 07:56:38.985966 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:38Z","lastTransitionTime":"2026-01-23T07:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.088926 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.089012 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.089030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.089055 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.089107 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.192490 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.192614 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.192739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.192776 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.192800 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.295758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.295821 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.295841 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.295866 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.295886 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.398844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.398903 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.398916 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.398935 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.398948 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.502462 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.502570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.502677 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.502758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.502777 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.605433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.605513 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.605537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.605572 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.605596 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.708593 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.708680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.708698 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.708724 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.708741 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.812087 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.812150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.812168 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.812191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.812211 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.826438 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.826489 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:39 crc kubenswrapper[4982]: E0123 07:56:39.826537 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.826517 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.826617 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:39 crc kubenswrapper[4982]: E0123 07:56:39.826755 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:39 crc kubenswrapper[4982]: E0123 07:56:39.826903 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:39 crc kubenswrapper[4982]: E0123 07:56:39.827052 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.832405 4982 scope.go:117] "RemoveContainer" containerID="a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.860047 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:39:48.324241113 +0000 UTC Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.916080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.916179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.916198 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.916223 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:39 crc kubenswrapper[4982]: I0123 07:56:39.916241 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:39Z","lastTransitionTime":"2026-01-23T07:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.020524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.020579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.020795 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.020828 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.020864 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.125549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.125714 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.125807 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.125930 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.126127 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.229965 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.230002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.230022 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.230039 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.230050 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.263907 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/2.log" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.266846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.267448 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.278591 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.289248 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.306693 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.319497 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.335646 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.335685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.335697 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.335731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.335740 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.337049 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.347958 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.368997 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.383385 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.395472 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.418264 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.438563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.438646 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.438688 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.438739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.438697 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.438753 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.452117 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.461983 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.480429 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.489763 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501dce6d-6209-49a1-a8cd-f6c3e0bb1c95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.510248 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.521850 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.532101 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.541405 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.541428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.541439 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.541452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.541478 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.543871 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:40Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.643806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.643848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.643855 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.643867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.643877 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.746253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.746315 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.746334 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.746358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.746376 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.849277 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.849347 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.849367 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.849398 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.849419 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.861173 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:12:40.683230187 +0000 UTC Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.951894 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.951933 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.951941 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.951955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:40 crc kubenswrapper[4982]: I0123 07:56:40.951965 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:40Z","lastTransitionTime":"2026-01-23T07:56:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.054872 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.054938 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.054955 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.054978 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.054998 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.157599 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.157687 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.157706 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.157731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.157748 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.260576 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.260644 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.260657 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.260675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.260688 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.274354 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/3.log" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.276243 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/2.log" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.281076 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" exitCode=1 Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.281161 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.281244 4982 scope.go:117] "RemoveContainer" containerID="a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.282143 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 07:56:41 crc kubenswrapper[4982]: E0123 07:56:41.282395 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.311859 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.335144 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.352947 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.364431 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.364487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.364498 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.364517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.364531 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.375175 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.391326 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.409967 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.430890 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.451986 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.468492 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.468549 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.468567 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.468592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.468613 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.473969 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.506335 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a443f0badc24dfe29123577624497e3fe2f2141d6449354c3bfb6aa6f8046b0f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:14Z\\\",\\\"message\\\":\\\"ln5h\\\\nI0123 07:56:14.625716 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-pqvl7\\\\nI0123 07:56:14.625732 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0123 07:56:14.625734 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625742 6592 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0123 07:56:14.625753 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0123 07:56:14.625742 6592 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0123 07:56:14.625597 6592 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-xxghm\\\\nF0123 07:56:14.625734 6592 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:41Z\\\",\\\"message\\\":\\\"t{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0123 07:56:40.868264 7002 lb_config.go:1031] Cluster endpoints for openshift-marketplace/certified-operators for network=default are: map[]\\\\nI0123 07:56:40.868291 7002 services_controller.go:443] Built service openshift-marketplace/certified-operators 
LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.214\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0123 07:56:40.868312 7002 services_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.523306 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"501dce6d-6209-49a1-a8cd-f6c3e0bb1c95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.562960 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.572032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.572080 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.572098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.572124 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.572142 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.588328 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.605498 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.631047 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.649752 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.668017 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.674927 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.674983 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.674997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.675016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.675028 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.689758 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.710782 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:41Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.778044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.778109 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.778130 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.778159 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.778178 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.826669 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.826660 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.826723 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.826862 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:41 crc kubenswrapper[4982]: E0123 07:56:41.827101 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:41 crc kubenswrapper[4982]: E0123 07:56:41.827486 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:41 crc kubenswrapper[4982]: E0123 07:56:41.827617 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:41 crc kubenswrapper[4982]: E0123 07:56:41.827687 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.861679 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:24:13.59717064 +0000 UTC Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.881143 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.881191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.881208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.881229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.881245 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.984525 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.984596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.984619 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.984691 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:41 crc kubenswrapper[4982]: I0123 07:56:41.984714 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:41Z","lastTransitionTime":"2026-01-23T07:56:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.087473 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.087535 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.087561 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.087750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.087795 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.190771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.190811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.190825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.190844 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.190858 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.286270 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/3.log" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.290208 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 07:56:42 crc kubenswrapper[4982]: E0123 07:56:42.290390 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.294272 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.294321 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.294337 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.294390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.294407 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.316748 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.334968 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.355175 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.375974 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.391793 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.398037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.398109 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.398128 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.398565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.398621 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.411501 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.427544 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501dce6d-6209-49a1-a8cd-f6c3e0bb1c95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.458767 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.478993 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.500938 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.504969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.505034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.505051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.505077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.505095 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.520162 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.601979 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d3
81ae6f0909281c3b89f441e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:41Z\\\",\\\"message\\\":\\\"t{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0123 07:56:40.868264 7002 lb_config.go:1031] Cluster endpoints for openshift-marketplace/certified-operators for network=default are: map[]\\\\nI0123 07:56:40.868291 7002 services_controller.go:443] Built service openshift-marketplace/certified-operators LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.214\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0123 07:56:40.868312 7002 services_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.608409 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.608460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.608477 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.608501 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.608520 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.621919 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.644148 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.665532 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.694376 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.711382 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.712336 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.712377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.712388 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.712405 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.712416 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.737581 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3
ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.757701 4982 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:42Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.816009 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.816077 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.816099 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.816126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.816146 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.862085 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:41:13.768399403 +0000 UTC Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.919455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.919538 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.919559 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.919584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:42 crc kubenswrapper[4982]: I0123 07:56:42.919601 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:42Z","lastTransitionTime":"2026-01-23T07:56:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.022326 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.022392 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.022413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.022437 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.022455 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.125369 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.125411 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.125422 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.125437 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.125449 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.229512 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.229575 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.229591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.229616 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.229666 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.334521 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.334882 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.334932 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.334963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.334989 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.438123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.438209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.438229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.438257 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.438276 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.542692 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.542761 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.542783 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.542815 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.542833 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.646274 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.646354 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.646373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.646399 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.646415 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.749499 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.749546 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.749555 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.749573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.749583 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.826966 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:43 crc kubenswrapper[4982]: E0123 07:56:43.827139 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.826980 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.827216 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:43 crc kubenswrapper[4982]: E0123 07:56:43.827405 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:43 crc kubenswrapper[4982]: E0123 07:56:43.827506 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.827618 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:43 crc kubenswrapper[4982]: E0123 07:56:43.827782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.858937 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.859094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.859126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.859167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.859200 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.862871 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:51:43.874270183 +0000 UTC Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.963346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.963432 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.963452 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.963496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:43 crc kubenswrapper[4982]: I0123 07:56:43.963518 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:43Z","lastTransitionTime":"2026-01-23T07:56:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.068201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.068274 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.068293 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.068326 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.068345 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.172563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.172655 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.172675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.172701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.172721 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.276747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.276829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.276852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.276883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.276906 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.381098 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.381189 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.381209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.381265 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.381285 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.484792 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.484849 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.484865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.484888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.484908 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.588807 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.588859 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.588880 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.588902 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.588920 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.692284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.692400 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.692427 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.692459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.692480 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.795972 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.796029 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.796042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.796062 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.796079 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.863705 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:55:54.257676298 +0000 UTC Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.899063 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.899166 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.899217 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.899245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:44 crc kubenswrapper[4982]: I0123 07:56:44.899294 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:44Z","lastTransitionTime":"2026-01-23T07:56:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.004244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.004330 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.004348 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.004373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.004391 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.109019 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.109120 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.109138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.109200 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.109220 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.212601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.212748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.212772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.212806 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.212829 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.315462 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.315586 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.315688 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.315722 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.315742 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.419276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.419331 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.419349 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.419377 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.419397 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.536366 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.536455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.536479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.536512 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.536542 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.640429 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.640496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.640512 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.640540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.640559 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.743084 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.743126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.743137 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.743153 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.743164 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.826424 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.826564 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.826590 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.826450 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:45 crc kubenswrapper[4982]: E0123 07:56:45.826704 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:45 crc kubenswrapper[4982]: E0123 07:56:45.826829 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:45 crc kubenswrapper[4982]: E0123 07:56:45.827002 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:45 crc kubenswrapper[4982]: E0123 07:56:45.827123 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.842008 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.846463 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.846523 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.846542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.846565 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.846583 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.863947 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.864333 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:34:52.519693986 +0000 UTC Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.884822 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.904386 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.919871 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.940253 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.950226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.950286 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:45 crc 
kubenswrapper[4982]: I0123 07:56:45.950301 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.950327 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.950344 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:45Z","lastTransitionTime":"2026-01-23T07:56:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.960370 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:5
5:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:45 crc kubenswrapper[4982]: I0123 07:56:45.986147 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:45Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.007698 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.028606 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.053621 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.054014 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.054059 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.054076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.054104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.054123 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.073757 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.093066 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.109282 4982 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501dce6d-6209-49a1-a8cd-f6c3e0bb1c95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.138957 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.158005 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.158049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.158060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.158083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.158040 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.158096 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.178753 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.197926 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.229245 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d3
81ae6f0909281c3b89f441e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:41Z\\\",\\\"message\\\":\\\"t{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0123 07:56:40.868264 7002 lb_config.go:1031] Cluster endpoints for openshift-marketplace/certified-operators for network=default are: map[]\\\\nI0123 07:56:40.868291 7002 services_controller.go:443] Built service openshift-marketplace/certified-operators LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.214\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0123 07:56:40.868312 7002 services_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:46Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.261756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.261818 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.261837 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.261868 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.261890 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.365730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.365802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.365820 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.365850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.365870 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.469939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.470030 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.470055 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.470092 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.470115 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.574571 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.574658 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.574678 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.574707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.574725 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.679119 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.679197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.679215 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.679248 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.679267 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.784114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.784209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.784229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.784267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.784288 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.865182 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:54:27.722382134 +0000 UTC
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.888087 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.888150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.888167 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.888193 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.888211 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.992175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.992567 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.992791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.993034 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:46 crc kubenswrapper[4982]: I0123 07:56:46.993223 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:46Z","lastTransitionTime":"2026-01-23T07:56:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.097433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.097493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.097510 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.097534 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.097552 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.200947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.201025 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.201051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.201083 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.201105 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.303871 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.303961 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.303987 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.304020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.304044 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.408600 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.409557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.409778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.409960 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.410121 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.515060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.515129 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.515149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.515176 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.515197 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.619032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.619102 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.619123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.619151 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.619169 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.722904 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.722975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.722994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.723019 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.723038 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.771575 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.771860 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.771822512 +0000 UTC m=+146.252185938 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.771979 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.772103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.772172 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.772240 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772261 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772295 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772316 4982 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772359 4982 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772376 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.772360307 +0000 UTC m=+146.252723733 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772477 4982 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772508 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.77247432 +0000 UTC m=+146.252837746 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772540 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772584 4982 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772602 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.772576163 +0000 UTC m=+146.252939589 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772618 4982 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.772798 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.772757837 +0000 UTC m=+146.253121423 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.826569 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.826580 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.826556 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.827115 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.827161 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.827180 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.827209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.827232 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.827685 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.827676 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.828114 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.828257 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 07:56:47 crc kubenswrapper[4982]: E0123 07:56:47.828314 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.866085 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:40:56.684470363 +0000 UTC
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.931174 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.931246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.931270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.931301 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:47 crc kubenswrapper[4982]: I0123 07:56:47.931323 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:47Z","lastTransitionTime":"2026-01-23T07:56:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.035053 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.035685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.035935 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.036298 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.036455 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.140115 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.140256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.140284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.140319 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.140339 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.243829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.243913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.243940 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.243976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.244002 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.347594 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.347710 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.347739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.347770 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.347788 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.451689 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.451961 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.451986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.452018 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.452040 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.557224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.557294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.557312 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.557340 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.557362 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.661587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.661678 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.661711 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.661749 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.661781 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.768171 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.768475 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.768499 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.768666 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.768697 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.867155 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:10:39.145939476 +0000 UTC
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.878374 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.878425 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.878444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.878471 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.878493 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.936706 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.936767 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.936785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.936814 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.936834 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 23 07:56:48 crc kubenswrapper[4982]: E0123 07:56:48.959596 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.966685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.966772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.966796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.966829 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.966853 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:48 crc kubenswrapper[4982]: E0123 07:56:48.989873 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:48Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.996558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.996678 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.996705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.996733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:48 crc kubenswrapper[4982]: I0123 07:56:48.996753 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:48Z","lastTransitionTime":"2026-01-23T07:56:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.019529 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.026052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.026129 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.026149 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.026177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.026197 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.047398 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.053614 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.053707 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
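Every one of these NotReady heartbeats carries the same, unchanging cause: the kubelet reports NetworkReady=false because nothing has yet written a CNI network configuration into /etc/kubernetes/cni/net.d/, and the Ready condition cannot flip until the network provider does. Below is a minimal standalone sketch in Go of that same directory check, assuming only the path quoted in the log message and the common libcni file extensions; it mirrors the condition the kubelet is reporting and is not the kubelet's own implementation.

// cnicheck.go: report whether any CNI network config exists in the
// directory the kubelet message complains about. Standalone diagnostic
// sketch; the path and the extensions (*.conf, *.conflist, *.json)
// are assumptions taken from the log and the usual libcni convention.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the kubelet message

	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatalf("read %s: %v", dir, err)
	}

	found := false
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir, "- the node will stay NotReady")
	}
}

On an OpenShift/CRC node that file is normally written by the OVN-Kubernetes pods once the cluster network operator brings them up, which is why the condition usually clears on its own during a healthy boot.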
event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.053726 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.053754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.053777 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.075335 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:49Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.075571 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.078220 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
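At this point the kubelet has exhausted its retries: every status patch is rejected because the API server must consult the node.network-node-identity.openshift.io validating webhook, and that webhook's serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2026-01-23, typical of a CRC VM resumed long after its internal certificates were issued. The following is a minimal sketch in Go that dials the webhook address quoted in the Post URL and prints the presented certificate's validity window so the expiry can be confirmed directly; the address is taken from the log, and the program is a standalone diagnostic, not kubelet code.

// webhookcert.go: dial a TLS endpoint and print its leaf certificate's
// validity window. Standalone diagnostic sketch; the address is the
// webhook endpoint quoted in the kubelet error above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // node.network-node-identity webhook, per the log

	// Verification is skipped deliberately: the point is to inspect the
	// certificate even though the chain no longer validates.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

If notAfter is indeed in the past, no amount of retrying the patch can succeed; the certificate has to be rotated, which on CRC usually means letting the cluster re-issue its internal certificates on a fresh start or recreating the instance.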
event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.078308 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.078334 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.078365 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.078388 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.182574 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.182748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.182775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.182811 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.182839 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.286723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.286805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.286825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.286858 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.286881 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.391131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.391218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.391238 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.391267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.391291 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.495051 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.495131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.495150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.495173 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.495191 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.600796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.600888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.600913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.600948 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.600976 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.704131 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.704256 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.704290 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.704333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.704369 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.811430 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.811505 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.811524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.811552 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.811572 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.826989 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.826995 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.827069 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.827521 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.827272 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.827079 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.827723 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:49 crc kubenswrapper[4982]: E0123 07:56:49.828128 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.868124 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:42:10.243341413 +0000 UTC Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.915435 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.915512 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.915532 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.915563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:49 crc kubenswrapper[4982]: I0123 07:56:49.915589 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:49Z","lastTransitionTime":"2026-01-23T07:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.019366 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.019441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.019467 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.019501 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.019525 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.124671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.124748 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.124767 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.124797 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.124817 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.228615 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.228731 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.228754 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.228789 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.228811 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.332138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.332210 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.332227 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.332251 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.332270 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.436148 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.436203 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.436220 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.436243 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.436260 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.539496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.539586 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.539609 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.539684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.539712 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.642825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.642928 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.642951 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.642983 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.643010 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.746358 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.746424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.746449 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.746479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.746503 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.850990 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.851158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.851199 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.851242 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.851287 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.868785 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:43:46.367529346 +0000 UTC Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.955047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.955579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.956162 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.956558 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:50 crc kubenswrapper[4982]: I0123 07:56:50.956851 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:50Z","lastTransitionTime":"2026-01-23T07:56:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.061047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.061126 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.061146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.061177 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.061200 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.164617 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.164709 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.164727 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.164756 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.164774 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.268547 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.268682 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.268715 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.268752 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.268776 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.372002 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.372054 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.372071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.372094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.372112 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.475479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.475563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.475584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.475617 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.475670 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.578800 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.579244 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.579339 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.579438 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.579521 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.684118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.684190 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.684212 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.684241 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.684260 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.788225 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.788316 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.788345 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.788391 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.788417 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.826324 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.826367 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:51 crc kubenswrapper[4982]: E0123 07:56:51.826574 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.826602 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.826691 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:51 crc kubenswrapper[4982]: E0123 07:56:51.826859 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:51 crc kubenswrapper[4982]: E0123 07:56:51.826993 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:51 crc kubenswrapper[4982]: E0123 07:56:51.827111 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.869854 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:08:58.524987492 +0000 UTC Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.891203 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.891264 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.891282 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.891310 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.891335 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.995414 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.995493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.995515 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.995546 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:51 crc kubenswrapper[4982]: I0123 07:56:51.995566 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:51Z","lastTransitionTime":"2026-01-23T07:56:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.098016 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.098079 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.098096 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.098123 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.098142 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.200987 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.201042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.201052 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.201069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.201080 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.304232 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.304275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.304287 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.304303 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.304314 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.407411 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.407466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.407477 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.407497 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.407510 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.510474 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.510734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.510760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.510846 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.510871 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.614796 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.614869 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.614889 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.614919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.614943 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.719539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.719596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.719614 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.719674 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.719696 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.823537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.824114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.824275 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.824433 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.824574 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.871962 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:11:15.813616214 +0000 UTC Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.932107 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.932180 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.932201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.932233 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:52 crc kubenswrapper[4982]: I0123 07:56:52.932254 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:52Z","lastTransitionTime":"2026-01-23T07:56:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.035903 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.035967 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.035980 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.036004 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.036028 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.139095 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.139173 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.139182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.139201 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.139212 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.242528 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.242618 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.242683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.242717 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.242737 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.353563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.353852 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.353883 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.353932 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.353960 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.456208 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.456245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.456253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.456267 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.456277 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.565343 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.565463 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.565491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.565532 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.565556 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.669060 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.669181 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.669197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.669241 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.669258 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.773085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.773158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.773182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.773237 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.773271 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.827148 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.827181 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.827252 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.827191 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:53 crc kubenswrapper[4982]: E0123 07:56:53.827922 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:53 crc kubenswrapper[4982]: E0123 07:56:53.828124 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.828554 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 07:56:53 crc kubenswrapper[4982]: E0123 07:56:53.828533 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:53 crc kubenswrapper[4982]: E0123 07:56:53.829189 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:56:53 crc kubenswrapper[4982]: E0123 07:56:53.828970 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.895200 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:56:06.371171387 +0000 UTC Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.897742 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.897822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.897845 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.897876 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:53 crc kubenswrapper[4982]: I0123 07:56:53.897895 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:53Z","lastTransitionTime":"2026-01-23T07:56:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.001740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.001801 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.001820 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.001850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.001870 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.105400 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.105458 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.105471 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.105493 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.105505 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.208418 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.208487 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.208506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.208537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.208559 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.312242 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.312311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.312329 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.312355 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.312373 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.416455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.416524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.416542 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.416570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.416596 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.520518 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.520604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.520665 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.520705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.520736 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.624995 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.625071 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.625094 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.625125 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.625148 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.728397 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.728483 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.728507 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.728539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.728560 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.832070 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.832132 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.832444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.832482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.832499 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.896181 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:30:32.2009264 +0000 UTC Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.936296 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.936393 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.936419 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.936455 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:54 crc kubenswrapper[4982]: I0123 07:56:54.936480 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:54Z","lastTransitionTime":"2026-01-23T07:56:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.039610 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.039720 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.039740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.039768 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.039788 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.143504 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.143573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.143604 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.143662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.143685 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.246997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.247068 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.247088 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.247116 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.247135 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.349921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.349976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.349986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.350008 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.350021 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.453103 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.453182 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.453207 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.453240 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.453340 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.557596 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.557750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.557778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.557816 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.557844 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.661497 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.661562 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.661584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.661606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.661660 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.765161 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.765229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.765243 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.765268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.765284 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.827191 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.827276 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.827222 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.827285 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:55 crc kubenswrapper[4982]: E0123 07:56:55.827510 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:55 crc kubenswrapper[4982]: E0123 07:56:55.827676 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:55 crc kubenswrapper[4982]: E0123 07:56:55.827948 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:55 crc kubenswrapper[4982]: E0123 07:56:55.828287 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.851922 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501dce6d-6209-49a1-a8cd-f6c3e0bb1c95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.873212 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.873270 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.873288 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.873314 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.873333 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.888733 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a1
58d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.896920 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:37:05.622056507 +0000 UTC Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.912032 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.937335 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.960084 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.977179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.977246 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.977266 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.977294 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.977316 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:55Z","lastTransitionTime":"2026-01-23T07:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:55 crc kubenswrapper[4982]: I0123 07:56:55.995207 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servic
eaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:41Z\\\",\\\"message\\\":\\\"t{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0123 07:56:40.868264 7002 lb_config.go:1031] Cluster endpoints for openshift-marketplace/certified-operators for network=default are: map[]\\\\nI0123 07:56:40.868291 7002 services_controller.go:443] Built service openshift-marketplace/certified-operators LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.214\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0123 07:56:40.868312 7002 
services_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:55Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.018032 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.042037 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.067559 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.081049 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.081105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.081128 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.081165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.081187 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.088090 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.114418 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.135131 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 
07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.154238 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.178067 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.184251 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.184533 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.184758 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.184998 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.185229 4982 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.200758 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.222907 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.247224 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.266003 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.284385 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:56Z is after 2025-08-24T17:21:41Z" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.289543 4982 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.289611 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.289687 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.289730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.289756 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.393428 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.393506 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.393524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.393555 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.393576 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.498680 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.498791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.498809 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.498843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.498869 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.602461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.602539 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.602560 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.602591 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.602610 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.705924 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.706042 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.706064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.706091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.706108 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.810532 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.810584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.810594 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.810613 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.810644 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.897392 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 20:09:47.07832664 +0000 UTC Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.915027 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.915096 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.915117 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.915148 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:56 crc kubenswrapper[4982]: I0123 07:56:56.915168 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:56Z","lastTransitionTime":"2026-01-23T07:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.019253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.019314 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.019335 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.019363 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.019409 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.122579 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.122679 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.122708 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.122737 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.122759 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.226843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.227012 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.227045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.227076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.227096 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.330461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.330563 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.330592 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.330675 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.330699 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.434508 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.434587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.434608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.434666 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.434688 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.538618 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.538728 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.538744 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.538771 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.538791 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.641939 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.642032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.642068 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.642105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.642129 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.746992 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.747086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.747114 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.747165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.747185 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.826728 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.826833 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.826776 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:57 crc kubenswrapper[4982]: E0123 07:56:57.827015 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.826742 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:57 crc kubenswrapper[4982]: E0123 07:56:57.827294 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:57 crc kubenswrapper[4982]: E0123 07:56:57.827417 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:57 crc kubenswrapper[4982]: E0123 07:56:57.827538 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.850975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.851064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.851082 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.851112 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.851138 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.897706 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:53:42.516842184 +0000 UTC Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.954600 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.954712 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.954738 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.954774 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:57 crc kubenswrapper[4982]: I0123 07:56:57.954799 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:57Z","lastTransitionTime":"2026-01-23T07:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.058578 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.058670 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.058695 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.058733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.058756 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.163067 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.163138 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.163162 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.163196 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.163223 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.269311 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.269415 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.269434 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.269471 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.269487 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.372523 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.372573 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.372587 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.372608 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.372659 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.475174 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.475236 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.475248 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.475274 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.475288 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.578867 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.578963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.578986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.579023 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.579047 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.683146 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.683231 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.683249 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.683283 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.683303 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.787333 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.787424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.787442 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.787473 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.787493 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.891826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.892262 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.892281 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.892312 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.892333 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.898343 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 00:11:07.993959119 +0000 UTC Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.996476 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.996566 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.996594 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.996704 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:58 crc kubenswrapper[4982]: I0123 07:56:58.996727 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:58Z","lastTransitionTime":"2026-01-23T07:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.100556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.100676 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.100700 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.100730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.100748 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.205069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.205562 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.205902 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.206154 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.206395 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.310570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.310671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.310690 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.310717 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.310736 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.363848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.364013 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.364043 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.364085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.364118 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.389973 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:59Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.395873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.395919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.395931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.395952 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.395963 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.413141 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:59Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.418253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.418356 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.418373 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.418424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.418449 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.440022 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:59Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.445984 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.446069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.446084 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.446105 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.446120 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.466210 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:59Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.486378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.486436 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.486454 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.486482 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.486500 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.512551 4982 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"da89adcc-d5b1-4734-a8fa-b65ffc69cee5\\\",\\\"systemUUID\\\":\\\"9454035b-7737-4bff-9cbb-fb77ac30a74d\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:56:59Z is after 
2025-08-24T17:21:41Z" Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.512933 4982 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.515786 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.515859 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.515878 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.515907 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.515927 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.619843 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.619890 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.619903 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.619922 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.619934 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.723594 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.723671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.723683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.723701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.723715 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826300 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826317 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826431 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.826545 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826575 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.826740 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826887 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826937 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826960 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.826993 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.826959 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.827014 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:56:59 crc kubenswrapper[4982]: E0123 07:56:59.827069 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.899479 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:08:22.021996912 +0000 UTC Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.929860 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.929917 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.929942 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.929966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:56:59 crc kubenswrapper[4982]: I0123 07:56:59.929984 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:56:59Z","lastTransitionTime":"2026-01-23T07:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.033660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.033747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.033766 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.033794 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.033813 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.137045 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.137133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.137158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.137192 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.137211 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.240122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.240197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.240222 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.240253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.240276 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.343521 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.343682 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.343701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.343729 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.343747 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.446968 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.447044 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.447062 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.447091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.447116 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.549896 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.549947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.549963 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.549985 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.550002 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.653901 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.653962 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.653976 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.653997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.654012 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.757733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.757807 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.757831 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.757868 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.757892 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.860991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.861069 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.861088 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.861122 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.861141 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.900131 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:55:19.720909956 +0000 UTC Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.964589 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.964683 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.964703 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.964730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:00 crc kubenswrapper[4982]: I0123 07:57:00.964750 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:00Z","lastTransitionTime":"2026-01-23T07:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.067133 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.067179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.067190 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.067209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.067221 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.169835 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.169895 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.169913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.169937 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.169956 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.277232 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.277308 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.277321 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.277338 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.277373 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.381577 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.381685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.381705 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.381733 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.381751 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.484411 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.484454 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.484464 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.484515 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.484530 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.587730 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.587848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.587869 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.587897 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.587917 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.691297 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.691441 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.691459 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.691485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.691504 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.795104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.795203 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.795223 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.795257 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.795284 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.826279 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.826363 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.826375 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:01 crc kubenswrapper[4982]: E0123 07:57:01.826531 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.826610 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:01 crc kubenswrapper[4982]: E0123 07:57:01.826730 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:01 crc kubenswrapper[4982]: E0123 07:57:01.826855 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:01 crc kubenswrapper[4982]: E0123 07:57:01.827035 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.898584 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.898724 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.898751 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.898780 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.898797 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:01Z","lastTransitionTime":"2026-01-23T07:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:01 crc kubenswrapper[4982]: I0123 07:57:01.900896 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:28:59.532382971 +0000 UTC Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.001858 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.001930 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.001950 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.001977 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.001995 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.105791 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.105873 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.105894 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.105926 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.105945 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.209728 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.209775 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.209787 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.209803 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.209817 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.312443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.312509 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.312524 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.312541 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.312553 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.416341 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.416423 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.416446 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.416474 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.416495 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.520210 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.520245 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.520278 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.520291 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.520302 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.623666 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.623725 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.623742 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.623764 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.623783 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.727667 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.727749 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.727772 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.727802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.727822 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.831237 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.831304 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.831321 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.831347 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.831364 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.901158 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:57:09.711177044 +0000 UTC Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.935079 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.935148 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.935165 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.935191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:02 crc kubenswrapper[4982]: I0123 07:57:02.935209 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:02Z","lastTransitionTime":"2026-01-23T07:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.038218 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.038276 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.038297 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.038327 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.038348 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
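
Between status passes, certificate_manager.go:356 prints a different rotation deadline on each evaluation (2025-11-19, 2025-12-04, 2025-12-23, ... against an expiration of 2026-02-24). The kubelet-serving certificate manager recomputes a randomly jittered deadline within the tail of the certificate's lifetime every time it checks, and because each computed deadline already lies in the past relative to the log clock (2026-01-23), rotation is considered due and the computation repeats on the next pass. A sketch of that deadline rule, assuming client-go's heuristic of roughly notBefore + lifetime * (0.7 + 0.3 * rand); the exact constants are an assumption, not taken from this build:

    // Sketch: jittered rotation deadline for a serving certificate,
    // assuming deadline = notBefore + lifetime*(0.7 + 0.3*rand).
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.3*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.Add(-365 * 24 * time.Hour) // issue date assumed
        for i := 0; i < 3; i++ {
            // Each evaluation lands on a different point in the tail of
            // the lifetime, matching the shifting deadlines in the log.
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }
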
Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.140687 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.140723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.140734 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.140750 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.140762 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.243709 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.243785 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.243808 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.243839 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.243861 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.346516 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.346545 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.346556 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.346570 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.346579 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.448891 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.448950 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.448966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.448991 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.449012 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.551699 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.551739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.551747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.551761 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.551770 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.654271 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.654335 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.654395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.654424 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.654478 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.757017 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.757086 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.757104 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.757136 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.757155 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.827254 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.827442 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.827484 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.827502 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:03 crc kubenswrapper[4982]: E0123 07:57:03.827614 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:03 crc kubenswrapper[4982]: E0123 07:57:03.827890 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:03 crc kubenswrapper[4982]: E0123 07:57:03.827949 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:03 crc kubenswrapper[4982]: E0123 07:57:03.828086 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.860386 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.860466 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.860491 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.860526 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.860548 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.886329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:03 crc kubenswrapper[4982]: E0123 07:57:03.886573 4982 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:57:03 crc kubenswrapper[4982]: E0123 07:57:03.886732 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs podName:0efe4bf7-1550-4885-a344-d045b3a1c653 nodeName:}" failed. No retries permitted until 2026-01-23 07:58:07.886701236 +0000 UTC m=+162.367064662 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs") pod "network-metrics-daemon-xxghm" (UID: "0efe4bf7-1550-4885-a344-d045b3a1c653") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.902002 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:15:30.734885542 +0000 UTC Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.963701 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.963744 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.963838 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.963908 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:03 crc kubenswrapper[4982]: I0123 07:57:03.963926 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:03Z","lastTransitionTime":"2026-01-23T07:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.066739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.066802 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.066819 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.066864 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.066882 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
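
The MountVolume.SetUp failure above is deferred, not fatal: object "openshift-multus"/"metrics-daemon-secret" not registered means the Secret has not yet been synced into the kubelet's object cache (its API informers cannot make progress while the node network is down), and nestedpendingoperations reschedules the mount with exponential backoff, here durationBeforeRetry 1m4s with the retry pinned to 07:58:07. 64s is what a doubling backoff reaches after seven consecutive failures from a 500 ms base (0.5 s * 2^7); the base and factor are assumptions for illustration:

    // Sketch: a doubling backoff consistent with durationBeforeRetry=1m4s.
    // The 500 ms base and factor of 2 are assumed, not read from kubelet.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d := 500 * time.Millisecond // assumed initial delay
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failure %d -> retry in %v\n", attempt, d)
            d *= 2 // doubling; the 8th failure prints 1m4s
        }
    }
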
Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.170091 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.170155 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.170172 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.170197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.170214 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.273227 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.273292 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.273310 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.273336 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.273353 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.376583 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.376661 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.376671 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.376685 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.376697 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.479606 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.479714 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.479739 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.479768 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.479787 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.582342 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.582403 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.582420 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.582443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.582461 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.686313 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.686395 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.686413 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.686443 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.686461 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.789997 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.790093 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.790117 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.790148 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.790171 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.893387 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.893445 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.893463 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.893489 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.893506 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.902887 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:55:37.770247614 +0000 UTC Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.996711 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.996782 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.996798 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.996826 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:04 crc kubenswrapper[4982]: I0123 07:57:04.996845 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:04Z","lastTransitionTime":"2026-01-23T07:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.100118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.100209 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.100226 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.100252 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.100271 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.203735 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.203804 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.203827 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.203859 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.203885 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.307425 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.307507 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.307526 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.307557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.307581 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.410460 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.410529 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.410550 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.410577 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.410595 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.513085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.513124 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.513135 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.513150 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.513159 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.615973 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.616020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.616037 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.616054 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.616066 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.718684 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.718732 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.718743 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.718762 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.718773 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.821432 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.821499 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.821517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.821543 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.821562 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.827080 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.827174 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:05 crc kubenswrapper[4982]: E0123 07:57:05.827240 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:05 crc kubenswrapper[4982]: E0123 07:57:05.827353 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.827404 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:05 crc kubenswrapper[4982]: E0123 07:57:05.827896 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.827984 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:05 crc kubenswrapper[4982]: E0123 07:57:05.828201 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.853071 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5501db4-cca7-46c4-a520-d9697236b48b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0123 07:55:38.657945 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 07:55:38.659493 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1685458200/tls.crt::/tmp/serving-cert-1685458200/tls.key\\\\\\\"\\\\nI0123 07:55:43.916448 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 07:55:43.919429 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 07:55:43.919456 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 07:55:43.919480 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 07:55:43.919485 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 07:55:43.924582 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 07:55:43.924606 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924611 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 07:55:43.924615 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 07:55:43.924621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 07:55:43.924643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 07:55:43.924646 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 07:55:43.924742 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 07:55:43.926247 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.869480 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9060d45b-b8c9-4fc4-aa59-5256c8bca50f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aacabcc06e76f1163717534b69d5bd57681d0225bdaf2f5c50dff7922a2773f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa4786ce6d44d8bbd6b960747c44b6bd949fb951096fbed67b7fde290214a75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3884a7d6f5e7594204f82d6d772a98b7079e516e2176391b6855fedd27034075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89c778a2befd511c283252927e32b3f5e43dd123d8845b1035b5d79480e4b494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.887276 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://67a132118c2e33506cdc2d2e6b20201a0d4bc472e0f98d8ed553c30f6251e885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 
07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.903524 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:52:33.470438947 +0000 UTC Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.903665 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gfvtq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3e50013-d63d-4e43-924e-74237cd4a690\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:31Z\\\",\\\"message\\\":\\\"2026-01-23T07:55:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986\\\\n2026-01-23T07:55:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_706ec8fb-cf7a-4b52-9e8e-6be5c4ecc986 to /host/opt/cni/bin/\\\\n2026-01-23T07:55:46Z [verbose] multus-daemon started\\\\n2026-01-23T07:55:46Z [verbose] Readiness Indicator file check\\\\n2026-01-23T07:56:31Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:56:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksvln\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gfvtq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.915807 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pqvl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658d846d-029f-4188-91c9-6f797e0ffa52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f405add8ca0c67fa943348a688a7e8774f26dc34832a84cabb06f69d0a2d2be4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-krqrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:48Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pqvl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.928306 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.928375 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.928414 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.928446 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.928469 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:05Z","lastTransitionTime":"2026-01-23T07:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.928527 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f08e29d-accb-44b3-b466-5edf48e58f17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1199b15f0c8222141c55dbc32ac0ab2b14c41a3b9bc3c9aa1f96af94fb5c43c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qgx8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tlww\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.942802 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"501dce6d-6209-49a1-a8cd-f6c3e0bb1c95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23ea6ad6990a1f43f3c8f6c761c6c38dc43ecba1012df4a9af76c765f7a60c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98a47d3c361f028d1990034252316bec6196891f343791ea21e5e800de6cc5c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:05 crc kubenswrapper[4982]: I0123 07:57:05.977968 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4993c35a-3de3-415c-baaa-915343dbfc67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b6a2df889fd59482a2e7c9f5b70e165461e513c741b1543c0908e71ca876b7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e164f34f733f1d481e992624370751c615e828daea39e88bb10bdbd1703affa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2dcb71e219c6de370afc468e992108f5a6118984fd90ea6cd69f9fb1ee7465b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2414ca02cb75bccf4463ee5561b929e55db5a158d4499b01fbfa59745704ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e18f8158dcd7d370561739e88ccf0eb0ce58603c8d5945f06a19d61fcafdda3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d005fbcb041f28960d1ef3956401a493a3dee04d1ffce4b92faa291a0ab7c56a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed9e78a298324f4e219a4421fcdbadf1baee109b71156ca349f3b3933820efa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f362cc883222297f498b686a307f72b1c6d4e28e94a9f4ef41a14bc481d8d2f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.000317 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e37a53fe-a173-4bcd-a701-cda65e5ecc4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44bfe120936b82e298064f2547d1e6bdf9aff06d893e8cf504753a0ad6071bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c32bb0829749ac09a4f096037f2a2b74c03923c7682de94accd25bf1eae5e26\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b1a9c2f25174b812861d09b067818e8664987b02e72adfd4a121f0232864a16\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:05Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.017444 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7c9eba5b09e676c85c5acaf078b7bd9c262928b2f7dbb86f123b6016f50fe37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.032191 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.032268 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.032291 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.032325 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.032349 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.034322 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.069526 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d3
81ae6f0909281c3b89f441e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T07:56:41Z\\\",\\\"message\\\":\\\"t{Name:grpc,Protocol:TCP,Port:50051,TargetPort:{0 50051 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{olm.catalogSource: certified-operators,olm.managed: true,},ClusterIP:10.217.5.214,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.214],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0123 07:56:40.868264 7002 lb_config.go:1031] Cluster endpoints for openshift-marketplace/certified-operators for network=default are: map[]\\\\nI0123 07:56:40.868291 7002 services_controller.go:443] Built service openshift-marketplace/certified-operators LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.214\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:50051, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0123 07:56:40.868312 7002 services_\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T07:56:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khrrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwb28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.093125 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.113067 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.135882 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.136032 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.136110 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.136142 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.136163 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.137462 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddf75f8ba036512cb7ef2fc43cdf0a2cc761c17c0aaf558e52fe0eb079b54c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38244d52c182f9b7cec8fc9dcf3eb763ffa46166cd42885cae4d7e03165db717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.153834 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4j96f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2f0e9b7-6608-482e-b3f0-fe6ba7a0bab2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5824cf1c1f7ba0204644dbf6ddfbb35e5379ba4a29f3075eeb16fb1b17901765\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22zjb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4j96f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.177077 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ccl65" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f7f9a3e-f8e7-4da6-bf78-47c9895b7d7c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bb613d199d055cd5580283df4b920cf9a1429db143324884a898942910b2f8f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99b2e9fe5c0910857029e27ed965187e1fdf18db3b59dcf172af7e9f5e4ecd1d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8674c3e3ddcb66e2d6c9b70014ae48328cc5c7ddebe60805ea0407ec6929a6b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6d3230ccbb0bae6fa441c1e172f70af8d82eb96c722148dfa93c370d0d11c81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43e79fe2d5132f51a2cb2b48d2c3878308fe9a1f8a82a62cba29d894153eab11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74e37022a2e996964b25f7fd7f271c0e83f3add7e891b7cf24c0dffbce2bb992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b5c4311818ce3a3d2105329d4f46fa0fb811eebce22c83e9453bb19fe35a926\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T07:55:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T07:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nx5bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:45Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ccl65\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.194177 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bf1104b4-b2b9-44ab-b85f-4f51228d1ad0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:56:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b62c11889cd25beec15a2529661ffb09cbd77f03b330532fd44dd9c97f8476d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8a19480519bf610084b6a54705051f4df027097cdfdb69c3c7f006be1d09df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T07:55:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9798\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:58Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4p58f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 
07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.210240 4982 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xxghm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efe4bf7-1550-4885-a344-d045b3a1c653\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T07:55:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g9ng7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T07:55:59Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xxghm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T07:57:06Z is after 2025-08-24T17:21:41Z" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.239273 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.239330 4982 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.239350 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.239378 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.239396 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.342517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.342601 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.342660 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.342692 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.342717 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.445914 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.445996 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.446020 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.446047 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.446065 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.549841 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.549906 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.549924 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.549949 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.549967 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.653481 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.653540 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.653557 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.653581 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.653599 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.756794 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.756886 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.756910 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.756937 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.756960 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.860776 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.860831 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.860850 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.860874 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.860891 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.903841 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:21:39.490869474 +0000 UTC Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.964390 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.964444 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.964461 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.964485 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:06 crc kubenswrapper[4982]: I0123 07:57:06.964504 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:06Z","lastTransitionTime":"2026-01-23T07:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.067118 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.067229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.067251 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.067284 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.067303 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.171116 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.171179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.171197 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.171224 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.171245 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.274813 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.274931 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.274946 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.274966 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.274978 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.378085 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.378229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.378286 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.378317 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.378337 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.481237 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.481309 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.481334 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.481369 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.481393 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.584302 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.584397 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.584419 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.584448 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.584463 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.687855 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.687915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.687932 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.687956 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.687975 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.791436 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.791479 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.791496 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.791517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.791535 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.827146 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.827237 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:07 crc kubenswrapper[4982]: E0123 07:57:07.827366 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.827447 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:07 crc kubenswrapper[4982]: E0123 07:57:07.827731 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.827811 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:07 crc kubenswrapper[4982]: E0123 07:57:07.828090 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:07 crc kubenswrapper[4982]: E0123 07:57:07.828221 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.894194 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.894229 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.894240 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.894260 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.894274 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.904109 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:16:24.995423501 +0000 UTC Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.997760 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.997857 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.997877 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.997920 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:07 crc kubenswrapper[4982]: I0123 07:57:07.997934 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:07Z","lastTransitionTime":"2026-01-23T07:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.100778 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.100865 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.100888 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.100921 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.100940 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.203253 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.203500 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.203517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.203537 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.203551 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.307064 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.307158 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.307179 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.307210 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.307228 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.410620 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.410720 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.410781 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.410975 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.411076 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.514848 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.514902 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.514919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.514944 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.514963 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.618076 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.618135 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.618152 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.618175 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.618194 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.721799 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.721915 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.721945 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.721986 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.722009 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.825859 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.825947 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.825967 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.825994 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.826012 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.827549 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 07:57:08 crc kubenswrapper[4982]: E0123 07:57:08.827826 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwb28_openshift-ovn-kubernetes(6e15f1e6-f3ea-4b15-89c0-e867fd52646e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.904721 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:43:27.930892275 +0000 UTC Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.928919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.928982 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.929012 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.929043 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:08 crc kubenswrapper[4982]: I0123 07:57:08.929067 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:08Z","lastTransitionTime":"2026-01-23T07:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.031747 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.031805 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.031822 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.031845 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.031862 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:09Z","lastTransitionTime":"2026-01-23T07:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.135825 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.135919 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.135938 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.135969 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.135999 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:09Z","lastTransitionTime":"2026-01-23T07:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.240361 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.240494 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.240517 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.240544 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.240562 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:09Z","lastTransitionTime":"2026-01-23T07:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.346662 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.346723 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.346740 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.346766 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.346787 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:09Z","lastTransitionTime":"2026-01-23T07:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.450206 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.450295 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.450317 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.450346 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.450367 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:09Z","lastTransitionTime":"2026-01-23T07:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.532795 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.532913 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.532989 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.533036 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.533112 4982 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T07:57:09Z","lastTransitionTime":"2026-01-23T07:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.612346 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4"] Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.613155 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.617191 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.617495 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.617500 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.617818 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.677232 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.677196054 podStartE2EDuration="1m26.677196054s" podCreationTimestamp="2026-01-23 07:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.646474033 +0000 UTC m=+104.126837479" watchObservedRunningTime="2026-01-23 07:57:09.677196054 +0000 UTC m=+104.157559480" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.704469 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.704430902 podStartE2EDuration="49.704430902s" podCreationTimestamp="2026-01-23 07:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.676981628 +0000 UTC m=+104.157345044" watchObservedRunningTime="2026-01-23 07:57:09.704430902 +0000 UTC m=+104.184794318" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.706744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c004cdf0-d315-4377-a60e-25006f46354e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.706867 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c004cdf0-d315-4377-a60e-25006f46354e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.706997 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c004cdf0-d315-4377-a60e-25006f46354e-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.707123 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c004cdf0-d315-4377-a60e-25006f46354e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.707214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c004cdf0-d315-4377-a60e-25006f46354e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.735289 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gfvtq" podStartSLOduration=84.735254626 podStartE2EDuration="1m24.735254626s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.733940361 +0000 UTC m=+104.214303787" watchObservedRunningTime="2026-01-23 07:57:09.735254626 +0000 UTC m=+104.215618062" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.755061 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pqvl7" podStartSLOduration=85.755028194 podStartE2EDuration="1m25.755028194s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.754705666 +0000 UTC m=+104.235069112" watchObservedRunningTime="2026-01-23 07:57:09.755028194 +0000 UTC m=+104.235391620" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.774005 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podStartSLOduration=85.773983731 podStartE2EDuration="1m25.773983731s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.773866908 +0000 UTC m=+104.254230334" watchObservedRunningTime="2026-01-23 07:57:09.773983731 +0000 UTC m=+104.254347147" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808135 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c004cdf0-d315-4377-a60e-25006f46354e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c004cdf0-d315-4377-a60e-25006f46354e-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c004cdf0-d315-4377-a60e-25006f46354e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808286 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c004cdf0-d315-4377-a60e-25006f46354e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808289 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c004cdf0-d315-4377-a60e-25006f46354e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808356 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c004cdf0-d315-4377-a60e-25006f46354e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.808392 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c004cdf0-d315-4377-a60e-25006f46354e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.809574 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c004cdf0-d315-4377-a60e-25006f46354e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.811149 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.811133644 podStartE2EDuration="33.811133644s" podCreationTimestamp="2026-01-23 07:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.81098367 +0000 UTC m=+104.291347096" watchObservedRunningTime="2026-01-23 07:57:09.811133644 +0000 UTC m=+104.291497040" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.819937 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c004cdf0-d315-4377-a60e-25006f46354e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.826716 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.826792 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.826831 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.826725 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:09 crc kubenswrapper[4982]: E0123 07:57:09.826946 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:09 crc kubenswrapper[4982]: E0123 07:57:09.827320 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:09 crc kubenswrapper[4982]: E0123 07:57:09.827566 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:09 crc kubenswrapper[4982]: E0123 07:57:09.827740 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.835228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c004cdf0-d315-4377-a60e-25006f46354e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lxdw4\" (UID: \"c004cdf0-d315-4377-a60e-25006f46354e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.845110 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.845081031 podStartE2EDuration="1m26.845081031s" podCreationTimestamp="2026-01-23 07:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.843888009 +0000 UTC m=+104.324251475" watchObservedRunningTime="2026-01-23 07:57:09.845081031 +0000 UTC m=+104.325444467" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.870225 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.870188502 podStartE2EDuration="1m23.870188502s" podCreationTimestamp="2026-01-23 07:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:09.869292738 +0000 UTC m=+104.349656154" watchObservedRunningTime="2026-01-23 07:57:09.870188502 +0000 UTC m=+104.350551928" Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.905168 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:50:23.826474977 +0000 UTC Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.905292 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.919901 4982 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 07:57:09 crc kubenswrapper[4982]: I0123 07:57:09.940115 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" Jan 23 07:57:10 crc kubenswrapper[4982]: I0123 07:57:10.052576 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4j96f" podStartSLOduration=86.052546546 podStartE2EDuration="1m26.052546546s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:10.051516938 +0000 UTC m=+104.531880344" watchObservedRunningTime="2026-01-23 07:57:10.052546546 +0000 UTC m=+104.532909942" Jan 23 07:57:10 crc kubenswrapper[4982]: I0123 07:57:10.083034 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ccl65" podStartSLOduration=85.08300446 podStartE2EDuration="1m25.08300446s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:10.079440215 +0000 UTC m=+104.559803611" watchObservedRunningTime="2026-01-23 07:57:10.08300446 +0000 UTC m=+104.563367886" Jan 23 07:57:10 crc kubenswrapper[4982]: I0123 07:57:10.097316 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4p58f" podStartSLOduration=85.097289782 podStartE2EDuration="1m25.097289782s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:10.096051508 +0000 UTC m=+104.576414904" watchObservedRunningTime="2026-01-23 07:57:10.097289782 +0000 UTC m=+104.577653198" Jan 23 07:57:10 crc kubenswrapper[4982]: I0123 07:57:10.401934 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" event={"ID":"c004cdf0-d315-4377-a60e-25006f46354e","Type":"ContainerStarted","Data":"bd03d915bb4ada12d98bf10262f9fea9803d403d691bfdf9a00f5b379724ad71"} Jan 23 07:57:10 crc kubenswrapper[4982]: I0123 07:57:10.402003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" event={"ID":"c004cdf0-d315-4377-a60e-25006f46354e","Type":"ContainerStarted","Data":"9d2541b39cc5696c6b6e72b4d24245dd7679c6838cb68d8e3641f33179bbdf77"} Jan 23 07:57:10 crc kubenswrapper[4982]: I0123 07:57:10.427269 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lxdw4" podStartSLOduration=86.427239179 podStartE2EDuration="1m26.427239179s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:10.427200258 +0000 UTC m=+104.907563694" watchObservedRunningTime="2026-01-23 07:57:10.427239179 +0000 UTC m=+104.907602565" Jan 23 07:57:11 crc kubenswrapper[4982]: I0123 07:57:11.826949 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:11 crc kubenswrapper[4982]: I0123 07:57:11.827167 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:11 crc kubenswrapper[4982]: I0123 07:57:11.827122 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:11 crc kubenswrapper[4982]: E0123 07:57:11.827212 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:11 crc kubenswrapper[4982]: E0123 07:57:11.827314 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:11 crc kubenswrapper[4982]: I0123 07:57:11.827345 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:11 crc kubenswrapper[4982]: E0123 07:57:11.827411 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:11 crc kubenswrapper[4982]: E0123 07:57:11.827481 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:13 crc kubenswrapper[4982]: I0123 07:57:13.827258 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:13 crc kubenswrapper[4982]: I0123 07:57:13.827359 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:13 crc kubenswrapper[4982]: I0123 07:57:13.827444 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:13 crc kubenswrapper[4982]: E0123 07:57:13.828724 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:13 crc kubenswrapper[4982]: E0123 07:57:13.828468 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:13 crc kubenswrapper[4982]: I0123 07:57:13.827462 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:13 crc kubenswrapper[4982]: E0123 07:57:13.829004 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:13 crc kubenswrapper[4982]: E0123 07:57:13.829279 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:15 crc kubenswrapper[4982]: I0123 07:57:15.826312 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:15 crc kubenswrapper[4982]: E0123 07:57:15.826736 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:15 crc kubenswrapper[4982]: I0123 07:57:15.827192 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:15 crc kubenswrapper[4982]: I0123 07:57:15.827265 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:15 crc kubenswrapper[4982]: E0123 07:57:15.827959 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:15 crc kubenswrapper[4982]: I0123 07:57:15.828216 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:15 crc kubenswrapper[4982]: E0123 07:57:15.828727 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:15 crc kubenswrapper[4982]: E0123 07:57:15.828585 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:17 crc kubenswrapper[4982]: I0123 07:57:17.826750 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:17 crc kubenswrapper[4982]: I0123 07:57:17.826791 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:17 crc kubenswrapper[4982]: E0123 07:57:17.827925 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:17 crc kubenswrapper[4982]: I0123 07:57:17.826928 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:17 crc kubenswrapper[4982]: I0123 07:57:17.826838 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:17 crc kubenswrapper[4982]: E0123 07:57:17.828099 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:17 crc kubenswrapper[4982]: E0123 07:57:17.828495 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:17 crc kubenswrapper[4982]: E0123 07:57:17.828330 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:18 crc kubenswrapper[4982]: I0123 07:57:18.435393 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/1.log" Jan 23 07:57:18 crc kubenswrapper[4982]: I0123 07:57:18.436030 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/0.log" Jan 23 07:57:18 crc kubenswrapper[4982]: I0123 07:57:18.436145 4982 generic.go:334] "Generic (PLEG): container finished" podID="a3e50013-d63d-4e43-924e-74237cd4a690" containerID="1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b" exitCode=1 Jan 23 07:57:18 crc kubenswrapper[4982]: I0123 07:57:18.436224 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerDied","Data":"1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b"} Jan 23 07:57:18 crc kubenswrapper[4982]: I0123 07:57:18.436343 4982 scope.go:117] "RemoveContainer" containerID="574e0ba872a42d1a3f873b4954639c9e7f50add9ea98cbd835f74ab3c851dab4" Jan 23 07:57:18 crc kubenswrapper[4982]: I0123 07:57:18.436778 4982 scope.go:117] "RemoveContainer" containerID="1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b" Jan 23 07:57:18 crc kubenswrapper[4982]: E0123 07:57:18.436998 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gfvtq_openshift-multus(a3e50013-d63d-4e43-924e-74237cd4a690)\"" pod="openshift-multus/multus-gfvtq" podUID="a3e50013-d63d-4e43-924e-74237cd4a690" Jan 23 07:57:19 crc kubenswrapper[4982]: I0123 07:57:19.441596 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/1.log" Jan 23 07:57:19 crc kubenswrapper[4982]: I0123 07:57:19.827188 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:19 crc kubenswrapper[4982]: I0123 07:57:19.827417 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:19 crc kubenswrapper[4982]: I0123 07:57:19.827461 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:19 crc kubenswrapper[4982]: I0123 07:57:19.827729 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:19 crc kubenswrapper[4982]: E0123 07:57:19.827710 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:19 crc kubenswrapper[4982]: E0123 07:57:19.828007 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:19 crc kubenswrapper[4982]: E0123 07:57:19.828100 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:19 crc kubenswrapper[4982]: E0123 07:57:19.828277 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:21 crc kubenswrapper[4982]: I0123 07:57:21.827126 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:21 crc kubenswrapper[4982]: I0123 07:57:21.827286 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:21 crc kubenswrapper[4982]: I0123 07:57:21.827307 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:21 crc kubenswrapper[4982]: I0123 07:57:21.827381 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:21 crc kubenswrapper[4982]: E0123 07:57:21.828434 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:21 crc kubenswrapper[4982]: E0123 07:57:21.828664 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:21 crc kubenswrapper[4982]: E0123 07:57:21.828906 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:21 crc kubenswrapper[4982]: E0123 07:57:21.829101 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:23 crc kubenswrapper[4982]: I0123 07:57:23.827002 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:23 crc kubenswrapper[4982]: I0123 07:57:23.827095 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:23 crc kubenswrapper[4982]: I0123 07:57:23.827024 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:23 crc kubenswrapper[4982]: E0123 07:57:23.827267 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:23 crc kubenswrapper[4982]: I0123 07:57:23.827349 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:23 crc kubenswrapper[4982]: E0123 07:57:23.827502 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:23 crc kubenswrapper[4982]: E0123 07:57:23.827676 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:23 crc kubenswrapper[4982]: E0123 07:57:23.827812 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:23 crc kubenswrapper[4982]: I0123 07:57:23.829189 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 07:57:24 crc kubenswrapper[4982]: I0123 07:57:24.467965 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/3.log" Jan 23 07:57:24 crc kubenswrapper[4982]: I0123 07:57:24.470953 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerStarted","Data":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} Jan 23 07:57:24 crc kubenswrapper[4982]: I0123 07:57:24.472437 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:57:24 crc kubenswrapper[4982]: I0123 07:57:24.502419 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podStartSLOduration=99.502398889 podStartE2EDuration="1m39.502398889s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:24.5005594 +0000 UTC m=+118.980922796" watchObservedRunningTime="2026-01-23 07:57:24.502398889 +0000 UTC m=+118.982762295" Jan 23 07:57:24 crc kubenswrapper[4982]: I0123 07:57:24.722562 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xxghm"] Jan 23 07:57:24 crc kubenswrapper[4982]: I0123 07:57:24.722789 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:24 crc kubenswrapper[4982]: E0123 07:57:24.722976 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:25 crc kubenswrapper[4982]: E0123 07:57:25.815974 4982 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 07:57:25 crc kubenswrapper[4982]: I0123 07:57:25.826508 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:25 crc kubenswrapper[4982]: I0123 07:57:25.826509 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:25 crc kubenswrapper[4982]: I0123 07:57:25.826509 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:25 crc kubenswrapper[4982]: I0123 07:57:25.826683 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:25 crc kubenswrapper[4982]: E0123 07:57:25.828745 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:25 crc kubenswrapper[4982]: E0123 07:57:25.828900 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:25 crc kubenswrapper[4982]: E0123 07:57:25.829047 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:25 crc kubenswrapper[4982]: E0123 07:57:25.829186 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:26 crc kubenswrapper[4982]: E0123 07:57:26.055960 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 07:57:27 crc kubenswrapper[4982]: I0123 07:57:27.827380 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:27 crc kubenswrapper[4982]: I0123 07:57:27.827478 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:27 crc kubenswrapper[4982]: E0123 07:57:27.827571 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:27 crc kubenswrapper[4982]: I0123 07:57:27.827478 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:27 crc kubenswrapper[4982]: I0123 07:57:27.827738 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:27 crc kubenswrapper[4982]: E0123 07:57:27.827737 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:27 crc kubenswrapper[4982]: E0123 07:57:27.827908 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:27 crc kubenswrapper[4982]: E0123 07:57:27.828060 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:29 crc kubenswrapper[4982]: I0123 07:57:29.826789 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:29 crc kubenswrapper[4982]: I0123 07:57:29.826797 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:29 crc kubenswrapper[4982]: E0123 07:57:29.827300 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:29 crc kubenswrapper[4982]: I0123 07:57:29.826883 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:29 crc kubenswrapper[4982]: I0123 07:57:29.826850 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:29 crc kubenswrapper[4982]: E0123 07:57:29.828151 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:29 crc kubenswrapper[4982]: E0123 07:57:29.828300 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:29 crc kubenswrapper[4982]: E0123 07:57:29.827827 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:31 crc kubenswrapper[4982]: E0123 07:57:31.057340 4982 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 07:57:31 crc kubenswrapper[4982]: I0123 07:57:31.827362 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:31 crc kubenswrapper[4982]: I0123 07:57:31.827451 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:31 crc kubenswrapper[4982]: E0123 07:57:31.827574 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:31 crc kubenswrapper[4982]: I0123 07:57:31.827602 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:31 crc kubenswrapper[4982]: I0123 07:57:31.827365 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:31 crc kubenswrapper[4982]: E0123 07:57:31.827754 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:31 crc kubenswrapper[4982]: E0123 07:57:31.827924 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:31 crc kubenswrapper[4982]: E0123 07:57:31.828091 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:32 crc kubenswrapper[4982]: I0123 07:57:32.827734 4982 scope.go:117] "RemoveContainer" containerID="1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b" Jan 23 07:57:33 crc kubenswrapper[4982]: I0123 07:57:33.506745 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/1.log" Jan 23 07:57:33 crc kubenswrapper[4982]: I0123 07:57:33.506817 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerStarted","Data":"9f750412f1b719e6e62b43d562b2fabbbf5cf8ac3de13de29735a7bf9167655c"} Jan 23 07:57:33 crc kubenswrapper[4982]: I0123 07:57:33.826738 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:33 crc kubenswrapper[4982]: I0123 07:57:33.826830 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:33 crc kubenswrapper[4982]: I0123 07:57:33.826740 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:33 crc kubenswrapper[4982]: E0123 07:57:33.826976 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:33 crc kubenswrapper[4982]: E0123 07:57:33.827060 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:33 crc kubenswrapper[4982]: E0123 07:57:33.827237 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:33 crc kubenswrapper[4982]: I0123 07:57:33.827786 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:33 crc kubenswrapper[4982]: E0123 07:57:33.828029 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:35 crc kubenswrapper[4982]: I0123 07:57:35.827214 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:35 crc kubenswrapper[4982]: I0123 07:57:35.827245 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:35 crc kubenswrapper[4982]: I0123 07:57:35.827316 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:35 crc kubenswrapper[4982]: I0123 07:57:35.827313 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:35 crc kubenswrapper[4982]: E0123 07:57:35.829229 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 07:57:35 crc kubenswrapper[4982]: E0123 07:57:35.829365 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xxghm" podUID="0efe4bf7-1550-4885-a344-d045b3a1c653" Jan 23 07:57:35 crc kubenswrapper[4982]: E0123 07:57:35.829481 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 07:57:35 crc kubenswrapper[4982]: E0123 07:57:35.829668 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.827364 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.827487 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.827490 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.827805 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.834838 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.840384 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.840493 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.840805 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.840928 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 07:57:37 crc kubenswrapper[4982]: I0123 07:57:37.840297 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.584180 4982 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.636471 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-n4294"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.637470 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nc4fw"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.637935 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrg76"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.638465 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.638675 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.639059 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.640460 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.641770 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.642464 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.651765 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.651841 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.652267 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.652559 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.652885 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.652986 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.653262 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.653531 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.653768 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.654108 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.654278 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.654376 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.654566 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.654791 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.654967 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.655224 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.664516 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.664968 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.665291 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.666024 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.666434 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.666863 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.667182 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.667671 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.684238 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x9vz"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.685331 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.686127 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.686918 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgf6m"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.686946 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.707574 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.710495 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.710495 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.712318 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.712494 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.712677 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.712882 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.713050 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.713434 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.713737 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.713771 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.713904 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.714723 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jzgnr"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.715396 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.715976 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.716105 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.716460 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.716619 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jzgnr" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.716855 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.720465 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.725685 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vtfwd"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.726271 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.727143 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.727505 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.727934 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.728759 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.729457 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.730164 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.738217 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.741082 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6c2x9"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.740470 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.740841 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.743418 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.743829 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.744153 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.744254 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.744290 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747166 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747304 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747428 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747564 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747618 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747787 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747880 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.747928 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748011 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 07:57:40 crc 
kubenswrapper[4982]: I0123 07:57:40.748042 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748056 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748156 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748179 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748192 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748280 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748296 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748391 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748410 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748417 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748518 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748525 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748552 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748606 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748643 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.748891 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.752163 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.752353 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753017 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753304 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753442 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753593 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753666 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753761 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.753990 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.754062 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.754205 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.754476 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.754531 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.756810 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.756808 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.757600 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.762166 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.774816 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.776096 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.777019 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5qgb4"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.777774 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.778196 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.778563 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.788089 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.788643 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.789832 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.790873 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.797872 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.798095 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.804022 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.811689 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nc4fw"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.813823 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-config\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814136 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-serving-cert\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814329 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-machine-approver-tls\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814437 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkc5\" (UniqueName: \"kubernetes.io/projected/2b188236-85a4-405c-9651-ba4b15dc4956-kube-api-access-2pkc5\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfwl\" (UniqueName: \"kubernetes.io/projected/352ea8f0-9969-4a48-9b37-c77819f1473a-kube-api-access-4qfwl\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814605 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-serving-cert\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-node-pullsecrets\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814878 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814963 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-config\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc 
kubenswrapper[4982]: I0123 07:57:40.815046 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.814360 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qdpxs"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.815167 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-image-import-ca\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.815898 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.815931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816001 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b188236-85a4-405c-9651-ba4b15dc4956-serving-cert\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816034 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-client-ca\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816056 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wwhk\" (UniqueName: \"kubernetes.io/projected/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-kube-api-access-5wwhk\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816080 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f80f80a-64b1-4eba-afd6-1ed4ec11d026-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-7dddz\" (UID: \"0f80f80a-64b1-4eba-afd6-1ed4ec11d026\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816105 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816408 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57bc3220-bc9e-44ac-89a7-67b8996394f0-serving-cert\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816437 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-audit\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816460 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhj6h\" (UniqueName: \"kubernetes.io/projected/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-kube-api-access-xhj6h\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816518 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816543 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816563 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-service-ca\") 
pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816588 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/352ea8f0-9969-4a48-9b37-c77819f1473a-serving-cert\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816610 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-auth-proxy-config\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816649 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-config\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816691 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pb6\" (UniqueName: \"kubernetes.io/projected/458b850a-fd8f-4b10-b504-76f0db87d7ad-kube-api-access-m6pb6\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816729 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-config\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816756 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816768 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-encryption-config\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.816822 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819292 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819390 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819415 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-dir\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819443 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsm26\" (UniqueName: \"kubernetes.io/projected/57bc3220-bc9e-44ac-89a7-67b8996394f0-kube-api-access-xsm26\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819465 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819490 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8px\" (UniqueName: \"kubernetes.io/projected/0f80f80a-64b1-4eba-afd6-1ed4ec11d026-kube-api-access-nk8px\") pod \"cluster-samples-operator-665b6dd947-7dddz\" (UID: 
\"0f80f80a-64b1-4eba-afd6-1ed4ec11d026\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-etcd-client\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-audit-dir\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819615 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/352ea8f0-9969-4a48-9b37-c77819f1473a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819651 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-client-ca\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819676 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgp2p\" (UniqueName: \"kubernetes.io/projected/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-kube-api-access-lgp2p\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.819697 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-config\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.820187 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sw7s7"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.820817 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.821066 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.823280 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.823593 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.824218 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.828079 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5z2hc"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.829371 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.830578 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.830771 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.830881 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831014 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.830835 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831472 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831678 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831729 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.832025 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831753 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831800 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831804 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831852 4982 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831860 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831847 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.831876 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.835379 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.836421 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.837548 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.838600 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.839690 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.840353 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.840754 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.840950 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.842603 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.843456 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.843930 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pqvqq"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.844309 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.845146 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-htgd9"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.845731 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.846485 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rvcj"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.846937 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.847664 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.848054 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.848866 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.849265 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.850120 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.851352 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.855286 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.855891 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.856253 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.856423 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.858588 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cxmqc"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.859945 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.860793 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.863453 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrg76"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.865732 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.867957 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.870524 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.873898 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.880257 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2zsl7"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.881433 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.881589 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.882346 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.883441 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.890721 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5z2hc"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.894224 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jzgnr"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.896140 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.897598 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x9vz"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.898847 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.901254 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6c2x9"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.901272 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.903052 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.904314 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgf6m"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.905470 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vtfwd"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.906915 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.909143 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5qgb4"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.912010 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-htgd9"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.915141 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.916669 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-594h7"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.918069 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-594h7" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.918252 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-r2zpk"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.919192 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.919467 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920463 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-encryption-config\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920515 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920544 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d580f9-7993-4027-804a-0a1dcc4e291c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920597 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920619 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-dir\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920661 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920666 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xsm26\" (UniqueName: \"kubernetes.io/projected/57bc3220-bc9e-44ac-89a7-67b8996394f0-kube-api-access-xsm26\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920736 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920788 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk8px\" (UniqueName: \"kubernetes.io/projected/0f80f80a-64b1-4eba-afd6-1ed4ec11d026-kube-api-access-nk8px\") pod \"cluster-samples-operator-665b6dd947-7dddz\" (UID: \"0f80f80a-64b1-4eba-afd6-1ed4ec11d026\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920818 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920892 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-etcd-client\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920915 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-audit-dir\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6gk\" (UniqueName: \"kubernetes.io/projected/e4d580f9-7993-4027-804a-0a1dcc4e291c-kube-api-access-rg6gk\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920964 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-client-ca\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.920984 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgp2p\" (UniqueName: 
\"kubernetes.io/projected/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-kube-api-access-lgp2p\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921025 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/352ea8f0-9969-4a48-9b37-c77819f1473a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921050 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-config\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921069 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pkc5\" (UniqueName: \"kubernetes.io/projected/2b188236-85a4-405c-9651-ba4b15dc4956-kube-api-access-2pkc5\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-config\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-serving-cert\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921131 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921149 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-machine-approver-tls\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921177 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-serving-cert\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") 
" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921202 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qfwl\" (UniqueName: \"kubernetes.io/projected/352ea8f0-9969-4a48-9b37-c77819f1473a-kube-api-access-4qfwl\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-node-pullsecrets\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921248 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921266 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921299 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-config\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921368 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-image-import-ca\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921385 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921403 
4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b188236-85a4-405c-9651-ba4b15dc4956-serving-cert\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921457 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d580f9-7993-4027-804a-0a1dcc4e291c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921482 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wwhk\" (UniqueName: \"kubernetes.io/projected/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-kube-api-access-5wwhk\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921499 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-client-ca\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921529 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f80f80a-64b1-4eba-afd6-1ed4ec11d026-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7dddz\" (UID: \"0f80f80a-64b1-4eba-afd6-1ed4ec11d026\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921547 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921563 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921564 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-audit\") pod \"apiserver-76f77b778f-hrg76\" (UID: 
\"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921656 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhj6h\" (UniqueName: \"kubernetes.io/projected/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-kube-api-access-xhj6h\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921679 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57bc3220-bc9e-44ac-89a7-67b8996394f0-serving-cert\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921758 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921779 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921801 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-auth-proxy-config\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921821 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/352ea8f0-9969-4a48-9b37-c77819f1473a-serving-cert\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 
07:57:40.921846 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-config\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921861 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921934 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-dir\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.921953 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6pb6\" (UniqueName: \"kubernetes.io/projected/458b850a-fd8f-4b10-b504-76f0db87d7ad-kube-api-access-m6pb6\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.922131 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-config\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.922348 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-audit\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.922571 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-node-pullsecrets\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.922942 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-config\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.923011 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.923138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.923329 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.923592 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/352ea8f0-9969-4a48-9b37-c77819f1473a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.922426 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.925137 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-auth-proxy-config\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.925310 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.925373 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-594h7"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.925660 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-config\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.926563 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.926864 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.927216 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.927460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.927797 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-image-import-ca\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.927911 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.928076 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-audit-dir\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.928516 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57bc3220-bc9e-44ac-89a7-67b8996394f0-config\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.928569 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-client-ca\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.928726 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929200 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-config\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-serving-cert\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929351 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-config\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929568 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-client-ca\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929683 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929686 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pqvqq"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.929841 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/352ea8f0-9969-4a48-9b37-c77819f1473a-serving-cert\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.930478 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57bc3220-bc9e-44ac-89a7-67b8996394f0-serving-cert\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.930482 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-etcd-client\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.930556 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-encryption-config\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.930611 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.930998 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-serving-cert\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.931063 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.931415 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.931518 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b188236-85a4-405c-9651-ba4b15dc4956-serving-cert\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.931588 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.932167 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.932310 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 
07:57:40.932480 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.932498 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rvcj"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.932885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-machine-approver-tls\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.933338 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.933659 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f80f80a-64b1-4eba-afd6-1ed4ec11d026-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7dddz\" (UID: \"0f80f80a-64b1-4eba-afd6-1ed4ec11d026\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.933731 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.934749 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.935747 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cxmqc"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.936805 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qdpxs"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.937839 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.938854 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r2zpk"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.939916 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.942142 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.943215 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bnwv8"] 
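Note on the entries above: the kubelet volume manager runs a reconcile loop that compares its desired state of world (every volume each scheduled pod needs) against the actual state of world (what is currently mounted). reconciler_common.go:218 logs "operationExecutor.MountVolume started" when a needed volume is not yet mounted, reconciler_common.go:245 logs the attach-verification step, and operation_generator.go:637 logs "MountVolume.SetUp succeeded" once the secret, configmap, or projected content has been materialized on disk. The Go sketch below is a minimal, hypothetical model of that started/succeeded pattern; the volume type and the setUp function are invented for illustration and are not kubelet source.

// volume_reconcile_sketch.go: a minimal, hypothetical model of the
// kubelet volume reconcile pass visible in the log above. The types and
// the setUp function are illustrative assumptions, not kubelet code.
package main

import "fmt"

// volume pairs a plugin-qualified unique name with the pod that needs it,
// mirroring the (UniqueName, pod, UID) fields printed by the kubelet.
type volume struct {
	uniqueName string
	podUID     string
	pod        string // namespace/name, as in the pod="..." field
}

// setUp stands in for MountVolume.SetUp; the real operation writes secret,
// configmap, or projected-token content into the pod's volume directory.
func setUp(v volume) error { return nil }

// reconcile mounts anything present in the desired state but absent from
// the actual state, logging the same started/succeeded pair as
// reconciler_common.go and operation_generator.go do above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue // actual state already matches desired state
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod=%q UID=%s\n",
			v.uniqueName, v.pod, v.podUID)
		if err := setUp(v); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", v.uniqueName, err)
			continue
		}
		mounted[v.uniqueName] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v.uniqueName, v.pod)
	}
}

func main() {
	// One desired volume, taken from the log; an empty actual state forces a mount.
	desired := []volume{{
		uniqueName: "kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies",
		podUID:     "458b850a-fd8f-4b10-b504-76f0db87d7ad",
		pod:        "openshift-authentication/oauth-openshift-558db77b4-vgf6m",
	}}
	reconcile(desired, map[string]bool{})
}

Running "go run volume_reconcile_sketch.go" prints one started line followed by one succeeded line, the same shape as the pairs in the surrounding records.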
Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.944299 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bnwv8"] Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.944419 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.960907 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 07:57:40 crc kubenswrapper[4982]: I0123 07:57:40.980789 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.000470 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.021120 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.022901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d580f9-7993-4027-804a-0a1dcc4e291c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.023031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6gk\" (UniqueName: \"kubernetes.io/projected/e4d580f9-7993-4027-804a-0a1dcc4e291c-kube-api-access-rg6gk\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.023163 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d580f9-7993-4027-804a-0a1dcc4e291c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.024011 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d580f9-7993-4027-804a-0a1dcc4e291c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.026794 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d580f9-7993-4027-804a-0a1dcc4e291c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.046289 4982 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.061136 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.081709 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.101085 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.121389 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.141835 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.182536 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.201212 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.221070 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.242316 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.261447 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.290172 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.301089 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.320467 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.341428 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.361565 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.380559 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.401998 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.422229 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.441531 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.462978 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.482116 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.501378 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.521858 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.541521 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.561390 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.651455 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.654113 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.655076 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.655241 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.660915 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.681013 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.702462 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.720763 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.742090 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.761424 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.782021 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.800973 4982 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.821933 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.841323 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.859532 4982 request.go:700] Waited for 1.015040953s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.862531 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.882236 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.902288 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.921414 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.942799 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.961764 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 07:57:41 crc kubenswrapper[4982]: I0123 07:57:41.982134 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.001364 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.021372 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.046788 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.062078 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.081823 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.101150 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.132285 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.141482 4982 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.160711 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.181095 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.201705 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.220909 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.240439 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.262152 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.281609 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.301366 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.321554 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.341161 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.360507 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.381672 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.402071 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.422129 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.441455 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.461912 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.481523 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.501230 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.521839 4982 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.541282 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.562185 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.583258 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.601036 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.621461 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.673277 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsm26\" (UniqueName: \"kubernetes.io/projected/57bc3220-bc9e-44ac-89a7-67b8996394f0-kube-api-access-xsm26\") pod \"authentication-operator-69f744f599-4x9vz\" (UID: \"57bc3220-bc9e-44ac-89a7-67b8996394f0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.690521 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qfwl\" (UniqueName: \"kubernetes.io/projected/352ea8f0-9969-4a48-9b37-c77819f1473a-kube-api-access-4qfwl\") pod \"openshift-config-operator-7777fb866f-lc9z5\" (UID: \"352ea8f0-9969-4a48-9b37-c77819f1473a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.707496 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgp2p\" (UniqueName: \"kubernetes.io/projected/86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c-kube-api-access-lgp2p\") pod \"machine-approver-56656f9798-n4294\" (UID: \"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.729993 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhj6h\" (UniqueName: \"kubernetes.io/projected/302fd8a4-7ced-4585-b5f7-15fa67d0d01e-kube-api-access-xhj6h\") pod \"apiserver-76f77b778f-hrg76\" (UID: \"302fd8a4-7ced-4585-b5f7-15fa67d0d01e\") " pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.746678 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6pb6\" (UniqueName: \"kubernetes.io/projected/458b850a-fd8f-4b10-b504-76f0db87d7ad-kube-api-access-m6pb6\") pod \"oauth-openshift-558db77b4-vgf6m\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.772113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wwhk\" (UniqueName: \"kubernetes.io/projected/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-kube-api-access-5wwhk\") pod \"controller-manager-879f6c89f-nc4fw\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" 
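Note on the surrounding entries: each reflector.go:368 "Caches populated" line marks a client-go reflector completing its initial LIST for one ConfigMap or Secret referenced by pods on this node, and the request.go:700 "Waited ... due to client-side throttling, not priority and fairness" lines show the kubelet's client-side rate limiter pacing that burst of API GETs. The sketch below models this with a token-bucket limiter from golang.org/x/time/rate; the 5 QPS / burst-10 settings and the 100ms reporting threshold are assumptions chosen for illustration, not values read from this log.

// throttle_sketch.go: a hedged model of client-side request throttling.
// A burst of requests (one per reflector, as above) drains the token
// bucket, so later requests block until tokens refill, producing
// "Waited for ..." style messages.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(5, 10) // assumed: 5 tokens/s refill, burst of 10
	ctx := context.Background()

	for i := 0; i < 20; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil { // block until a token is available
			fmt.Println("wait aborted:", err)
			return
		}
		// Only report waits over a threshold, as the real client does.
		if waited := time.Since(start); waited > 100*time.Millisecond {
			fmt.Printf("Waited for %v due to client-side throttling, request: GET #%d\n",
				waited, i)
		}
	}
}

In this sketch the first ten iterations pass immediately on burst tokens and the remainder each wait roughly 200ms; with the much larger flood of reflector LISTs during kubelet startup, the same mechanism produces the one-to-two-second waits recorded above.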
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.789043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk8px\" (UniqueName: \"kubernetes.io/projected/0f80f80a-64b1-4eba-afd6-1ed4ec11d026-kube-api-access-nk8px\") pod \"cluster-samples-operator-665b6dd947-7dddz\" (UID: \"0f80f80a-64b1-4eba-afd6-1ed4ec11d026\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.799186 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.818385 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.821752 4982 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.823714 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pkc5\" (UniqueName: \"kubernetes.io/projected/2b188236-85a4-405c-9651-ba4b15dc4956-kube-api-access-2pkc5\") pod \"route-controller-manager-6576b87f9c-lrw5k\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.841560 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.843382 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.851940 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.859790 4982 request.go:700] Waited for 1.91505355s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.862443 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.897699 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.908127 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.908844 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6gk\" (UniqueName: \"kubernetes.io/projected/e4d580f9-7993-4027-804a-0a1dcc4e291c-kube-api-access-rg6gk\") pod \"openshift-controller-manager-operator-756b6f6bc6-txk5n\" (UID: \"e4d580f9-7993-4027-804a-0a1dcc4e291c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.936119 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.956260 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.969694 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/269ea902-7c9c-4223-acbe-31b0c705ef37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970147 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee2e39ac-1eb4-4518-83cb-9331803431ca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970193 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-ca\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970212 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-encryption-config\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970311 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db4eb27-d67a-483f-8813-3921956ad64d-audit-dir\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970331 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/474d5614-be8d-4336-ab5d-6b694e219975-proxy-tls\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970525 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-trusted-ca\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970568 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-service-ca\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.970986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-bound-sa-token\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.971156 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.971461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-config\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.971570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.971706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhnvf\" (UniqueName: \"kubernetes.io/projected/8db4eb27-d67a-483f-8813-3921956ad64d-kube-api-access-mhnvf\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972126 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-certificates\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:42 crc 
kubenswrapper[4982]: I0123 07:57:42.972158 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8g8d\" (UniqueName: \"kubernetes.io/projected/fc403e93-13e4-4220-9685-e3e23dd5ab3a-kube-api-access-q8g8d\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-config\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-client\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972417 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-etcd-client\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972453 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/261667fa-341d-4eb6-a5db-14064cc71663-config\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972503 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc403e93-13e4-4220-9685-e3e23dd5ab3a-serving-cert\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972521 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc403e93-13e4-4220-9685-e3e23dd5ab3a-config\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972877 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6x7\" (UniqueName: \"kubernetes.io/projected/474d5614-be8d-4336-ab5d-6b694e219975-kube-api-access-bf6x7\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972932 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.972972 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjb6n\" (UniqueName: \"kubernetes.io/projected/5d6acb07-307a-4aa2-b316-c00e07ea41d9-kube-api-access-sjb6n\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee2e39ac-1eb4-4518-83cb-9331803431ca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2"
Jan 23 07:57:42 crc kubenswrapper[4982]: E0123 07:57:42.973371 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.473346359 +0000 UTC m=+137.953709755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973471 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/261667fa-341d-4eb6-a5db-14064cc71663-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973501 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973536 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-trusted-ca\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973557 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2e39ac-1eb4-4518-83cb-9331803431ca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bf1c62-20c1-4f84-b5ec-7756d496ae64-serving-cert\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973756 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-metrics-tls\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s"
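
The MountDevice failure above is the kubelet reporting that the CSI node plugin for kubevirt.io.hostpath-provisioner has not yet registered with it: during early startup the volume reconciler can race ahead of the driver pod, so the operation is parked and retried. The kubelet publishes its per-node list of registered CSI drivers on the node's CSINode object, which is one place to confirm when the driver comes up. A minimal client-go sketch of that check; the kubeconfig location and the node name "crc" are illustrative assumptions, not taken from the log:

    // List the CSI drivers the kubelet has registered on one node.
    // Sketch only: assumes a reachable cluster via the default kubeconfig
    // (~/.kube/config) and a node named "crc".
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // CSINode mirrors the kubelet's plugin-registration state for the node;
        // the error above means this list does not (yet) contain the driver.
        n, err := cs.StorageV1().CSINodes().Get(context.Background(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, d := range n.Spec.Drivers {
            fmt.Println(d.Name) // expect kubevirt.io.hostpath-provisioner once its node plugin is up
        }
    }
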
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.973835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc403e93-13e4-4220-9685-e3e23dd5ab3a-trusted-ca\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974006 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974115 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6acb07-307a-4aa2-b316-c00e07ea41d9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974142 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/474d5614-be8d-4336-ab5d-6b694e219975-images\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974181 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xn6\" (UniqueName: \"kubernetes.io/projected/81bf1c62-20c1-4f84-b5ec-7756d496ae64-kube-api-access-c4xn6\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-tls\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974295 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-serving-cert\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974521 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6acb07-307a-4aa2-b316-c00e07ea41d9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974544 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/261667fa-341d-4eb6-a5db-14064cc71663-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974602 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974664 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xww6c\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-kube-api-access-xww6c\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974684 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/269ea902-7c9c-4223-acbe-31b0c705ef37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974702 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6zmj\" (UniqueName: \"kubernetes.io/projected/269ea902-7c9c-4223-acbe-31b0c705ef37-kube-api-access-g6zmj\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-audit-policies\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974767 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/269ea902-7c9c-4223-acbe-31b0c705ef37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974861 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms77v\" (UniqueName: \"kubernetes.io/projected/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-kube-api-access-ms77v\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974929 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzthf\" (UniqueName: \"kubernetes.io/projected/22a8f798-f137-4e9b-b1ba-09f8a84179be-kube-api-access-nzthf\") pod \"downloads-7954f5f757-jzgnr\" (UID: \"22a8f798-f137-4e9b-b1ba-09f8a84179be\") " pod="openshift-console/downloads-7954f5f757-jzgnr"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.974985 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/474d5614-be8d-4336-ab5d-6b694e219975-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb"
Jan 23 07:57:42 crc kubenswrapper[4982]: I0123 07:57:42.993152 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077483 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.077725 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.577690409 +0000 UTC m=+138.058053795 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077791 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk5nt\" (UniqueName: \"kubernetes.io/projected/e2a34d4c-ce3b-47ce-929d-c366188eca0a-kube-api-access-jk5nt\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077832 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdbf5600-4c0e-4816-94d5-b4960e3823e6-certs\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077864 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-webhook-cert\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077888 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjt8n\" (UniqueName: \"kubernetes.io/projected/4b63c011-53bb-4dce-8db4-849770a043ba-kube-api-access-tjt8n\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077923 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-plugins-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh78d\" (UniqueName: \"kubernetes.io/projected/2b3140c2-1f0b-44ac-ac93-b5862802c60b-kube-api-access-nh78d\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z"
Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.077974 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-tmpfs\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt"
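
Both the MountDevice and the TearDown failures above have the same root cause, and both operations are parked by nestedpendingoperations.go with a "No retries permitted until ..." stamp. The 500ms durationBeforeRetry grows on each consecutive failure, so the kubelet avoids hot-looping while the driver is absent yet converges quickly once it registers. A minimal sketch of that schedule, assuming the upstream kubelet defaults (500ms initial delay, doubling per failure, capped at roughly two minutes); the doubling and the cap are assumptions here, only the initial 500ms appears in the log:

    // Sketch of the kubelet-style retry delay implied by
    // "durationBeforeRetry 500ms" in the errors above.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialDelay = 500 * time.Millisecond         // first value seen in the log
        maxDelay     = 2*time.Minute + 2*time.Second  // assumed upper bound
    )

    // delayBeforeRetry returns the wait after the given number of consecutive failures.
    func delayBeforeRetry(failures int) time.Duration {
        d := initialDelay
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for f := 1; f <= 9; f++ {
            fmt.Printf("failure %d -> retry in %v\n", f, delayBeforeRetry(f))
        }
    }
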
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2a34d4c-ce3b-47ce-929d-c366188eca0a-images\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-client\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-oauth-config\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/261667fa-341d-4eb6-a5db-14064cc71663-config\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078174 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-etcd-client\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078198 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxhc\" (UniqueName: \"kubernetes.io/projected/c32c238b-fa91-4017-b566-10cece74d21a-kube-api-access-nmxhc\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldqt\" (UniqueName: \"kubernetes.io/projected/c6073517-08b4-4946-b5e7-21aaaae62de3-kube-api-access-lldqt\") pod \"ingress-canary-r2zpk\" (UID: \"c6073517-08b4-4946-b5e7-21aaaae62de3\") " pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078256 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc403e93-13e4-4220-9685-e3e23dd5ab3a-serving-cert\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27587190-e124-4217-94c7-13a44045143d-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078312 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc403e93-13e4-4220-9685-e3e23dd5ab3a-config\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078334 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-console-config\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078364 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6x7\" (UniqueName: \"kubernetes.io/projected/474d5614-be8d-4336-ab5d-6b694e219975-kube-api-access-bf6x7\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078396 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2q98\" (UniqueName: \"kubernetes.io/projected/dabf56b7-6888-4d13-ad8d-d4d69016d37d-kube-api-access-k2q98\") pod \"multus-admission-controller-857f4d67dd-5z2hc\" (UID: \"dabf56b7-6888-4d13-ad8d-d4d69016d37d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078423 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjb6n\" (UniqueName: \"kubernetes.io/projected/5d6acb07-307a-4aa2-b316-c00e07ea41d9-kube-api-access-sjb6n\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078449 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-signing-cabundle\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078483 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078522 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: 
\"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078550 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpp5\" (UniqueName: \"kubernetes.io/projected/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-kube-api-access-7wpp5\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078578 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/261667fa-341d-4eb6-a5db-14064cc71663-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2e39ac-1eb4-4518-83cb-9331803431ca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078654 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bf1c62-20c1-4f84-b5ec-7756d496ae64-serving-cert\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078683 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-metrics-tls\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078708 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7a6af3-b71b-445f-81e6-4b47252f128d-config\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078731 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsf5z\" (UniqueName: \"kubernetes.io/projected/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-kube-api-access-wsf5z\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078781 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c3f6594-0f93-41e5-89c4-96cc33522f7f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpksx\" (UID: 
\"6c3f6594-0f93-41e5-89c4-96cc33522f7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078809 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfd2\" (UniqueName: \"kubernetes.io/projected/6c3f6594-0f93-41e5-89c4-96cc33522f7f-kube-api-access-drfd2\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpksx\" (UID: \"6c3f6594-0f93-41e5-89c4-96cc33522f7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6acb07-307a-4aa2-b316-c00e07ea41d9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078861 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7a6af3-b71b-445f-81e6-4b47252f128d-serving-cert\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078884 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-default-certificate\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078910 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b7f7288-854e-407f-957d-7c4b0b0e4a46-metrics-tls\") pod \"dns-operator-744455d44c-cxmqc\" (UID: \"6b7f7288-854e-407f-957d-7c4b0b0e4a46\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078938 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4963328d-9499-41bb-be08-0160ebc41b78-proxy-tls\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078964 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-serving-cert\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.078989 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7bk\" (UniqueName: \"kubernetes.io/projected/3d7b53f4-cc2b-4572-afee-60441a232155-kube-api-access-md7bk\") pod \"collect-profiles-29485905-slggk\" (UID: 
\"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6acb07-307a-4aa2-b316-c00e07ea41d9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079062 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/269ea902-7c9c-4223-acbe-31b0c705ef37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079088 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6zmj\" (UniqueName: \"kubernetes.io/projected/269ea902-7c9c-4223-acbe-31b0c705ef37-kube-api-access-g6zmj\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079113 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa160e32-0794-4750-b720-d8b554208eb2-metrics-tls\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079146 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm2vg\" (UniqueName: \"kubernetes.io/projected/6b7f7288-854e-407f-957d-7c4b0b0e4a46-kube-api-access-xm2vg\") pod \"dns-operator-744455d44c-cxmqc\" (UID: \"6b7f7288-854e-407f-957d-7c4b0b0e4a46\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079182 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079208 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms77v\" (UniqueName: \"kubernetes.io/projected/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-kube-api-access-ms77v\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079242 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzthf\" (UniqueName: \"kubernetes.io/projected/22a8f798-f137-4e9b-b1ba-09f8a84179be-kube-api-access-nzthf\") pod \"downloads-7954f5f757-jzgnr\" (UID: \"22a8f798-f137-4e9b-b1ba-09f8a84179be\") " pod="openshift-console/downloads-7954f5f757-jzgnr" Jan 23 07:57:43 crc 
kubenswrapper[4982]: I0123 07:57:43.079273 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451eebf-43c1-46a4-bee2-24ce97bc0472-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079299 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-ca\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079359 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079385 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d7b53f4-cc2b-4572-afee-60441a232155-config-volume\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-oauth-serving-cert\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079435 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2b3140c2-1f0b-44ac-ac93-b5862802c60b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079470 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-trusted-ca\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079496 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btq4\" (UniqueName: \"kubernetes.io/projected/aa160e32-0794-4750-b720-d8b554208eb2-kube-api-access-4btq4\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079543 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-bound-sa-token\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079595 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-trusted-ca-bundle\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079653 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079678 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8g8d\" (UniqueName: \"kubernetes.io/projected/fc403e93-13e4-4220-9685-e3e23dd5ab3a-kube-api-access-q8g8d\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c32c238b-fa91-4017-b566-10cece74d21a-service-ca-bundle\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079738 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-certificates\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079790 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-config\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mnqf\" (UniqueName: 
\"kubernetes.io/projected/27587190-e124-4217-94c7-13a44045143d-kube-api-access-7mnqf\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079860 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079888 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6073517-08b4-4946-b5e7-21aaaae62de3-cert\") pod \"ingress-canary-r2zpk\" (UID: \"c6073517-08b4-4946-b5e7-21aaaae62de3\") " pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dfc137d-f565-4d31-8714-2d2343744701-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pc68l\" (UID: \"8dfc137d-f565-4d31-8714-2d2343744701\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079944 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx57s\" (UniqueName: \"kubernetes.io/projected/bd7a6af3-b71b-445f-81e6-4b47252f128d-kube-api-access-lx57s\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.079970 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2b3140c2-1f0b-44ac-ac93-b5862802c60b-srv-cert\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080009 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7451eebf-43c1-46a4-bee2-24ce97bc0472-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee2e39ac-1eb4-4518-83cb-9331803431ca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080068 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dabf56b7-6888-4d13-ad8d-d4d69016d37d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5z2hc\" (UID: \"dabf56b7-6888-4d13-ad8d-d4d69016d37d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080095 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzztr\" (UniqueName: \"kubernetes.io/projected/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-kube-api-access-mzztr\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080121 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080148 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-mountpoint-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080181 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-trusted-ca\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc403e93-13e4-4220-9685-e3e23dd5ab3a-trusted-ca\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080274 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/474d5614-be8d-4336-ab5d-6b694e219975-images\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2a34d4c-ce3b-47ce-929d-c366188eca0a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: 
\"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xn6\" (UniqueName: \"kubernetes.io/projected/81bf1c62-20c1-4f84-b5ec-7756d496ae64-kube-api-access-c4xn6\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080354 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d7b53f4-cc2b-4572-afee-60441a232155-secret-volume\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdbf5600-4c0e-4816-94d5-b4960e3823e6-node-bootstrap-token\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080432 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-tls\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlshq\" (UniqueName: \"kubernetes.io/projected/01338d8d-7937-46ce-889e-5fae3c8152c2-kube-api-access-jlshq\") pod \"migrator-59844c95c7-6cpps\" (UID: \"01338d8d-7937-46ce-889e-5fae3c8152c2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4963328d-9499-41bb-be08-0160ebc41b78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080515 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7564s\" (UniqueName: \"kubernetes.io/projected/4963328d-9499-41bb-be08-0160ebc41b78-kube-api-access-7564s\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080539 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-csi-data-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: 
\"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080567 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a34d4c-ce3b-47ce-929d-c366188eca0a-config\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080595 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-serving-cert\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.080781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/261667fa-341d-4eb6-a5db-14064cc71663-config\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/261667fa-341d-4eb6-a5db-14064cc71663-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081660 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-audit-policies\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081731 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-socket-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081760 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-registration-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081848 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-signing-key\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081891 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081916 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xww6c\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-kube-api-access-xww6c\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/269ea902-7c9c-4223-acbe-31b0c705ef37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.081989 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/474d5614-be8d-4336-ab5d-6b694e219975-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082032 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pqf\" (UniqueName: \"kubernetes.io/projected/bdbf5600-4c0e-4816-94d5-b4960e3823e6-kube-api-access-j8pqf\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-metrics-certs\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082087 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/269ea902-7c9c-4223-acbe-31b0c705ef37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5znx\" (UniqueName: \"kubernetes.io/projected/7451eebf-43c1-46a4-bee2-24ce97bc0472-kube-api-access-f5znx\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082140 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee2e39ac-1eb4-4518-83cb-9331803431ca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082165 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-encryption-config\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082192 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-service-ca\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db4eb27-d67a-483f-8813-3921956ad64d-audit-dir\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/474d5614-be8d-4336-ab5d-6b694e219975-proxy-tls\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082341 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgzp\" (UniqueName: \"kubernetes.io/projected/8dfc137d-f565-4d31-8714-2d2343744701-kube-api-access-xqgzp\") pod \"package-server-manager-789f6589d5-pc68l\" (UID: \"8dfc137d-f565-4d31-8714-2d2343744701\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-service-ca\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082414 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa160e32-0794-4750-b720-d8b554208eb2-config-volume\") 
pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082463 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27587190-e124-4217-94c7-13a44045143d-srv-cert\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082491 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54dt6\" (UniqueName: \"kubernetes.io/projected/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-kube-api-access-54dt6\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082513 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-stats-auth\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082540 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-config\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.082565 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhnvf\" (UniqueName: \"kubernetes.io/projected/8db4eb27-d67a-483f-8813-3921956ad64d-kube-api-access-mhnvf\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.083592 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.583568186 +0000 UTC m=+138.063931582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.084572 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-config\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.084685 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.085080 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-etcd-client\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.085337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.085441 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-service-ca\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.085524 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-audit-policies\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.085595 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2e39ac-1eb4-4518-83cb-9331803431ca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.086634 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-ca\") pod 
\"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.087226 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8db4eb27-d67a-483f-8813-3921956ad64d-audit-dir\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.093708 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-trusted-ca\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.094178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81bf1c62-20c1-4f84-b5ec-7756d496ae64-etcd-client\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.095719 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-trusted-ca\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.096417 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc403e93-13e4-4220-9685-e3e23dd5ab3a-serving-cert\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.097298 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc403e93-13e4-4220-9685-e3e23dd5ab3a-config\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.097516 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/474d5614-be8d-4336-ab5d-6b694e219975-images\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.102309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-certificates\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.104934 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/474d5614-be8d-4336-ab5d-6b694e219975-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.106406 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/261667fa-341d-4eb6-a5db-14064cc71663-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.107320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc403e93-13e4-4220-9685-e3e23dd5ab3a-trusted-ca\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.108149 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8db4eb27-d67a-483f-8813-3921956ad64d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.109502 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d6acb07-307a-4aa2-b316-c00e07ea41d9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.110587 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81bf1c62-20c1-4f84-b5ec-7756d496ae64-config\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.111322 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-tls\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.113582 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-encryption-config\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.116155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/269ea902-7c9c-4223-acbe-31b0c705ef37-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc 
kubenswrapper[4982]: I0123 07:57:43.117996 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/269ea902-7c9c-4223-acbe-31b0c705ef37-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.123952 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.131164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8g8d\" (UniqueName: \"kubernetes.io/projected/fc403e93-13e4-4220-9685-e3e23dd5ab3a-kube-api-access-q8g8d\") pod \"console-operator-58897d9998-vtfwd\" (UID: \"fc403e93-13e4-4220-9685-e3e23dd5ab3a\") " pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.133972 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee2e39ac-1eb4-4518-83cb-9331803431ca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.140168 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/474d5614-be8d-4336-ab5d-6b694e219975-proxy-tls\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.140259 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8db4eb27-d67a-483f-8813-3921956ad64d-serving-cert\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.140275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d6acb07-307a-4aa2-b316-c00e07ea41d9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.140515 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81bf1c62-20c1-4f84-b5ec-7756d496ae64-serving-cert\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.140740 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-metrics-tls\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.140787 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.147281 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjb6n\" (UniqueName: \"kubernetes.io/projected/5d6acb07-307a-4aa2-b316-c00e07ea41d9-kube-api-access-sjb6n\") pod \"openshift-apiserver-operator-796bbdcf4f-svcz2\" (UID: \"5d6acb07-307a-4aa2-b316-c00e07ea41d9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.165131 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6x7\" (UniqueName: \"kubernetes.io/projected/474d5614-be8d-4336-ab5d-6b694e219975-kube-api-access-bf6x7\") pod \"machine-config-operator-74547568cd-gjcvb\" (UID: \"474d5614-be8d-4336-ab5d-6b694e219975\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.176920 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e93a65a5-a2ad-4e6d-8d3d-f07480db50e7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gn9gk\" (UID: \"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184033 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184181 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mnqf\" (UniqueName: \"kubernetes.io/projected/27587190-e124-4217-94c7-13a44045143d-kube-api-access-7mnqf\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6073517-08b4-4946-b5e7-21aaaae62de3-cert\") pod \"ingress-canary-r2zpk\" (UID: \"c6073517-08b4-4946-b5e7-21aaaae62de3\") " pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184230 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx57s\" (UniqueName: \"kubernetes.io/projected/bd7a6af3-b71b-445f-81e6-4b47252f128d-kube-api-access-lx57s\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dfc137d-f565-4d31-8714-2d2343744701-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pc68l\" (UID: \"8dfc137d-f565-4d31-8714-2d2343744701\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7451eebf-43c1-46a4-bee2-24ce97bc0472-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184294 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2b3140c2-1f0b-44ac-ac93-b5862802c60b-srv-cert\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dabf56b7-6888-4d13-ad8d-d4d69016d37d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5z2hc\" (UID: \"dabf56b7-6888-4d13-ad8d-d4d69016d37d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184331 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzztr\" (UniqueName: \"kubernetes.io/projected/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-kube-api-access-mzztr\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184347 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-mountpoint-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184371 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2a34d4c-ce3b-47ce-929d-c366188eca0a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184394 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d7b53f4-cc2b-4572-afee-60441a232155-secret-volume\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 
07:57:43.184411 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdbf5600-4c0e-4816-94d5-b4960e3823e6-node-bootstrap-token\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184427 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4963328d-9499-41bb-be08-0160ebc41b78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184442 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7564s\" (UniqueName: \"kubernetes.io/projected/4963328d-9499-41bb-be08-0160ebc41b78-kube-api-access-7564s\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184457 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-csi-data-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a34d4c-ce3b-47ce-929d-c366188eca0a-config\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlshq\" (UniqueName: \"kubernetes.io/projected/01338d8d-7937-46ce-889e-5fae3c8152c2-kube-api-access-jlshq\") pod \"migrator-59844c95c7-6cpps\" (UID: \"01338d8d-7937-46ce-889e-5fae3c8152c2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184507 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-socket-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184521 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-registration-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184539 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-signing-key\") pod 
\"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184577 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8pqf\" (UniqueName: \"kubernetes.io/projected/bdbf5600-4c0e-4816-94d5-b4960e3823e6-kube-api-access-j8pqf\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184592 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-metrics-certs\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184615 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5znx\" (UniqueName: \"kubernetes.io/projected/7451eebf-43c1-46a4-bee2-24ce97bc0472-kube-api-access-f5znx\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184646 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-service-ca\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184663 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgzp\" (UniqueName: \"kubernetes.io/projected/8dfc137d-f565-4d31-8714-2d2343744701-kube-api-access-xqgzp\") pod \"package-server-manager-789f6589d5-pc68l\" (UID: \"8dfc137d-f565-4d31-8714-2d2343744701\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184696 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa160e32-0794-4750-b720-d8b554208eb2-config-volume\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184711 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27587190-e124-4217-94c7-13a44045143d-srv-cert\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-stats-auth\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184749 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-54dt6\" (UniqueName: \"kubernetes.io/projected/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-kube-api-access-54dt6\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184763 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdbf5600-4c0e-4816-94d5-b4960e3823e6-certs\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184779 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-webhook-cert\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184794 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk5nt\" (UniqueName: \"kubernetes.io/projected/e2a34d4c-ce3b-47ce-929d-c366188eca0a-kube-api-access-jk5nt\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184818 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-plugins-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjt8n\" (UniqueName: \"kubernetes.io/projected/4b63c011-53bb-4dce-8db4-849770a043ba-kube-api-access-tjt8n\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184850 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-tmpfs\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184865 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh78d\" (UniqueName: \"kubernetes.io/projected/2b3140c2-1f0b-44ac-ac93-b5862802c60b-kube-api-access-nh78d\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184885 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-oauth-config\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " 
pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184913 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2a34d4c-ce3b-47ce-929d-c366188eca0a-images\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184935 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmxhc\" (UniqueName: \"kubernetes.io/projected/c32c238b-fa91-4017-b566-10cece74d21a-kube-api-access-nmxhc\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184956 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lldqt\" (UniqueName: \"kubernetes.io/projected/c6073517-08b4-4946-b5e7-21aaaae62de3-kube-api-access-lldqt\") pod \"ingress-canary-r2zpk\" (UID: \"c6073517-08b4-4946-b5e7-21aaaae62de3\") " pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184974 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27587190-e124-4217-94c7-13a44045143d-profile-collector-cert\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.184988 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-console-config\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185006 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2q98\" (UniqueName: \"kubernetes.io/projected/dabf56b7-6888-4d13-ad8d-d4d69016d37d-kube-api-access-k2q98\") pod \"multus-admission-controller-857f4d67dd-5z2hc\" (UID: \"dabf56b7-6888-4d13-ad8d-d4d69016d37d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185022 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-signing-cabundle\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185055 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpp5\" (UniqueName: \"kubernetes.io/projected/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-kube-api-access-7wpp5\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bd7a6af3-b71b-445f-81e6-4b47252f128d-config\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsf5z\" (UniqueName: \"kubernetes.io/projected/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-kube-api-access-wsf5z\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c3f6594-0f93-41e5-89c4-96cc33522f7f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpksx\" (UID: \"6c3f6594-0f93-41e5-89c4-96cc33522f7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfd2\" (UniqueName: \"kubernetes.io/projected/6c3f6594-0f93-41e5-89c4-96cc33522f7f-kube-api-access-drfd2\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpksx\" (UID: \"6c3f6594-0f93-41e5-89c4-96cc33522f7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7a6af3-b71b-445f-81e6-4b47252f128d-serving-cert\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185159 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b7f7288-854e-407f-957d-7c4b0b0e4a46-metrics-tls\") pod \"dns-operator-744455d44c-cxmqc\" (UID: \"6b7f7288-854e-407f-957d-7c4b0b0e4a46\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185175 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-default-certificate\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185189 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4963328d-9499-41bb-be08-0160ebc41b78-proxy-tls\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185207 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-serving-cert\") pod \"console-f9d7485db-qdpxs\" (UID: 
\"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185224 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7bk\" (UniqueName: \"kubernetes.io/projected/3d7b53f4-cc2b-4572-afee-60441a232155-kube-api-access-md7bk\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa160e32-0794-4750-b720-d8b554208eb2-metrics-tls\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185261 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm2vg\" (UniqueName: \"kubernetes.io/projected/6b7f7288-854e-407f-957d-7c4b0b0e4a46-kube-api-access-xm2vg\") pod \"dns-operator-744455d44c-cxmqc\" (UID: \"6b7f7288-854e-407f-957d-7c4b0b0e4a46\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185284 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185307 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451eebf-43c1-46a4-bee2-24ce97bc0472-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185323 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185343 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-oauth-serving-cert\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185375 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2b3140c2-1f0b-44ac-ac93-b5862802c60b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185393 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d7b53f4-cc2b-4572-afee-60441a232155-config-volume\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185409 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btq4\" (UniqueName: \"kubernetes.io/projected/aa160e32-0794-4750-b720-d8b554208eb2-kube-api-access-4btq4\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185431 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-trusted-ca-bundle\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.185447 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c32c238b-fa91-4017-b566-10cece74d21a-service-ca-bundle\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.186182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c32c238b-fa91-4017-b566-10cece74d21a-service-ca-bundle\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.186257 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.686243571 +0000 UTC m=+138.166606947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.189830 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-tmpfs\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.190327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-registration-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.190581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6073517-08b4-4946-b5e7-21aaaae62de3-cert\") pod \"ingress-canary-r2zpk\" (UID: \"c6073517-08b4-4946-b5e7-21aaaae62de3\") " pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.191588 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2a34d4c-ce3b-47ce-929d-c366188eca0a-images\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.192384 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-oauth-serving-cert\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.192584 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-serving-cert\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.193115 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dfc137d-f565-4d31-8714-2d2343744701-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pc68l\" (UID: \"8dfc137d-f565-4d31-8714-2d2343744701\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.193337 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451eebf-43c1-46a4-bee2-24ce97bc0472-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: 
\"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.193699 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.193974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7a6af3-b71b-445f-81e6-4b47252f128d-serving-cert\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.194653 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4963328d-9499-41bb-be08-0160ebc41b78-proxy-tls\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.195265 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-default-certificate\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.195449 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d7b53f4-cc2b-4572-afee-60441a232155-config-volume\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.195801 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.195893 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-csi-data-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.195967 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa160e32-0794-4750-b720-d8b554208eb2-config-volume\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.196228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2a34d4c-ce3b-47ce-929d-c366188eca0a-config\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.196363 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa160e32-0794-4750-b720-d8b554208eb2-metrics-tls\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.196436 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-mountpoint-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.196499 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c3f6594-0f93-41e5-89c4-96cc33522f7f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpksx\" (UID: \"6c3f6594-0f93-41e5-89c4-96cc33522f7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.197301 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-socket-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.198843 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-signing-cabundle\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.199333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd7a6af3-b71b-445f-81e6-4b47252f128d-config\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.199589 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-trusted-ca-bundle\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.200088 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.200189 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4963328d-9499-41bb-be08-0160ebc41b78-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.197253 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/269ea902-7c9c-4223-acbe-31b0c705ef37-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.200493 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-signing-key\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.200768 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dabf56b7-6888-4d13-ad8d-d4d69016d37d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-5z2hc\" (UID: \"dabf56b7-6888-4d13-ad8d-d4d69016d37d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.200868 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-service-ca\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.200968 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-plugins-dir\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.201146 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/27587190-e124-4217-94c7-13a44045143d-srv-cert\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.201184 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-oauth-config\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.201994 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-metrics-certs\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: 
I0123 07:57:43.202055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdbf5600-4c0e-4816-94d5-b4960e3823e6-node-bootstrap-token\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.195773 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b7f7288-854e-407f-957d-7c4b0b0e4a46-metrics-tls\") pod \"dns-operator-744455d44c-cxmqc\" (UID: \"6b7f7288-854e-407f-957d-7c4b0b0e4a46\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.202604 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-console-config\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.203851 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7451eebf-43c1-46a4-bee2-24ce97bc0472-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.203941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c32c238b-fa91-4017-b566-10cece74d21a-stats-auth\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.203955 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/27587190-e124-4217-94c7-13a44045143d-profile-collector-cert\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.206045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2b3140c2-1f0b-44ac-ac93-b5862802c60b-srv-cert\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.206136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-webhook-cert\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.206952 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdbf5600-4c0e-4816-94d5-b4960e3823e6-certs\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " 
pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.207712 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2a34d4c-ce3b-47ce-929d-c366188eca0a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.211526 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2b3140c2-1f0b-44ac-ac93-b5862802c60b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.212064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d7b53f4-cc2b-4572-afee-60441a232155-secret-volume\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.226468 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6zmj\" (UniqueName: \"kubernetes.io/projected/269ea902-7c9c-4223-acbe-31b0c705ef37-kube-api-access-g6zmj\") pod \"cluster-image-registry-operator-dc59b4c8b-9tg74\" (UID: \"269ea902-7c9c-4223-acbe-31b0c705ef37\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.234009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/261667fa-341d-4eb6-a5db-14064cc71663-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-njdcr\" (UID: \"261667fa-341d-4eb6-a5db-14064cc71663\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.256023 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee2e39ac-1eb4-4518-83cb-9331803431ca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8b4g2\" (UID: \"ee2e39ac-1eb4-4518-83cb-9331803431ca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.280472 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.286935 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.287546 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.787532139 +0000 UTC m=+138.267895525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.299740 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.301682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms77v\" (UniqueName: \"kubernetes.io/projected/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-kube-api-access-ms77v\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.305987 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.333662 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-bound-sa-token\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.341704 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ea0c85a9-22f8-45d0-b22a-140fc3f0bb05-bound-sa-token\") pod \"ingress-operator-5b745b69d9-cw57s\" (UID: \"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.355501 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xww6c\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-kube-api-access-xww6c\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.362112 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.362471 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.364406 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nc4fw"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.369865 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.375442 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" Jan 23 07:57:43 crc kubenswrapper[4982]: W0123 07:57:43.378172 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod352ea8f0_9969_4a48_9b37_c77819f1473a.slice/crio-1a114915642e7e20820b8d8672485cdc71735140c8a8c7d0061d8cc670cb1588 WatchSource:0}: Error finding container 1a114915642e7e20820b8d8672485cdc71735140c8a8c7d0061d8cc670cb1588: Status 404 returned error can't find the container with id 1a114915642e7e20820b8d8672485cdc71735140c8a8c7d0061d8cc670cb1588 Jan 23 07:57:43 crc kubenswrapper[4982]: W0123 07:57:43.380160 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c271848_a6a6_4ca1_8ee6_fd4fa7b1a83e.slice/crio-0a7640133b6c8148ca8e27488608c173c7e250e10a078d9f1d5db1a9a22bc951 WatchSource:0}: Error finding container 0a7640133b6c8148ca8e27488608c173c7e250e10a078d9f1d5db1a9a22bc951: Status 404 returned error can't find the container with id 0a7640133b6c8148ca8e27488608c173c7e250e10a078d9f1d5db1a9a22bc951 Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.382936 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.384425 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzthf\" (UniqueName: \"kubernetes.io/projected/22a8f798-f137-4e9b-b1ba-09f8a84179be-kube-api-access-nzthf\") pod \"downloads-7954f5f757-jzgnr\" (UID: \"22a8f798-f137-4e9b-b1ba-09f8a84179be\") " pod="openshift-console/downloads-7954f5f757-jzgnr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.387733 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.388169 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.888136518 +0000 UTC m=+138.368499914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.390827 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.390968 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.391216 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.89120086 +0000 UTC m=+138.371564246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.401803 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xn6\" (UniqueName: \"kubernetes.io/projected/81bf1c62-20c1-4f84-b5ec-7756d496ae64-kube-api-access-c4xn6\") pod \"etcd-operator-b45778765-5qgb4\" (UID: \"81bf1c62-20c1-4f84-b5ec-7756d496ae64\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.426450 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhnvf\" (UniqueName: \"kubernetes.io/projected/8db4eb27-d67a-483f-8813-3921956ad64d-kube-api-access-mhnvf\") pod \"apiserver-7bbb656c7d-bhcdz\" (UID: \"8db4eb27-d67a-483f-8813-3921956ad64d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.426825 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hrg76"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.446038 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mnqf\" (UniqueName: \"kubernetes.io/projected/27587190-e124-4217-94c7-13a44045143d-kube-api-access-7mnqf\") pod \"catalog-operator-68c6474976-hwqwn\" (UID: \"27587190-e124-4217-94c7-13a44045143d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.451403 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.455797 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjt8n\" (UniqueName: \"kubernetes.io/projected/4b63c011-53bb-4dce-8db4-849770a043ba-kube-api-access-tjt8n\") pod \"console-f9d7485db-qdpxs\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.476638 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx57s\" (UniqueName: \"kubernetes.io/projected/bd7a6af3-b71b-445f-81e6-4b47252f128d-kube-api-access-lx57s\") pod \"service-ca-operator-777779d784-4hr6k\" (UID: \"bd7a6af3-b71b-445f-81e6-4b47252f128d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.482021 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.492230 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.492783 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:43.992759175 +0000 UTC m=+138.473122561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.501809 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh78d\" (UniqueName: \"kubernetes.io/projected/2b3140c2-1f0b-44ac-ac93-b5862802c60b-kube-api-access-nh78d\") pod \"olm-operator-6b444d44fb-tjc5z\" (UID: \"2b3140c2-1f0b-44ac-ac93-b5862802c60b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.505029 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.541671 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.544195 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x9vz"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.551121 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmxhc\" (UniqueName: \"kubernetes.io/projected/c32c238b-fa91-4017-b566-10cece74d21a-kube-api-access-nmxhc\") pod \"router-default-5444994796-sw7s7\" (UID: \"c32c238b-fa91-4017-b566-10cece74d21a\") " pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.551813 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgf6m"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.558972 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.562303 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.562703 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.564774 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lldqt\" (UniqueName: \"kubernetes.io/projected/c6073517-08b4-4946-b5e7-21aaaae62de3-kube-api-access-lldqt\") pod \"ingress-canary-r2zpk\" (UID: \"c6073517-08b4-4946-b5e7-21aaaae62de3\") " pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.576751 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" event={"ID":"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e","Type":"ContainerStarted","Data":"0a7640133b6c8148ca8e27488608c173c7e250e10a078d9f1d5db1a9a22bc951"} Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.581050 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jzgnr" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.586457 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7bk\" (UniqueName: \"kubernetes.io/projected/3d7b53f4-cc2b-4572-afee-60441a232155-kube-api-access-md7bk\") pod \"collect-profiles-29485905-slggk\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.591082 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" event={"ID":"302fd8a4-7ced-4585-b5f7-15fa67d0d01e","Type":"ContainerStarted","Data":"6cd11fbad3c152bb10bf5b8bcd28a2ce089776ae099d8a3c3d54534fc10605a3"} Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.593549 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.594097 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.094081823 +0000 UTC m=+138.574445209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.596444 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsf5z\" (UniqueName: \"kubernetes.io/projected/3b6912da-3a32-4b32-89bf-ef8dc1a3fff7-kube-api-access-wsf5z\") pod \"csi-hostpathplugin-bnwv8\" (UID: \"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7\") " pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.596547 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" event={"ID":"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c","Type":"ContainerStarted","Data":"3afb31b86db1d6871202a458694ff384329c7edeb0b9d9412b03b65090d3e2f3"} Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.596592 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" event={"ID":"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c","Type":"ContainerStarted","Data":"4548c079d488b3770b0ad73af4824390e9557847d54c4b46133f154de06ff8d0"} Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.598371 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" event={"ID":"352ea8f0-9969-4a48-9b37-c77819f1473a","Type":"ContainerStarted","Data":"1a114915642e7e20820b8d8672485cdc71735140c8a8c7d0061d8cc670cb1588"} Jan 23 07:57:43 crc 
kubenswrapper[4982]: I0123 07:57:43.599175 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpp5\" (UniqueName: \"kubernetes.io/projected/cd5e5739-906e-4bfa-8ee9-521ddc8d22d8-kube-api-access-7wpp5\") pod \"packageserver-d55dfcdfc-q7frt\" (UID: \"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: W0123 07:57:43.624342 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458b850a_fd8f_4b10_b504_76f0db87d7ad.slice/crio-b2094d2d78ff47dda3f51fd8b2ca5032e4b434ca38d1e3abf5ecc8436d402e44 WatchSource:0}: Error finding container b2094d2d78ff47dda3f51fd8b2ca5032e4b434ca38d1e3abf5ecc8436d402e44: Status 404 returned error can't find the container with id b2094d2d78ff47dda3f51fd8b2ca5032e4b434ca38d1e3abf5ecc8436d402e44 Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.627998 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.652061 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btq4\" (UniqueName: \"kubernetes.io/projected/aa160e32-0794-4750-b720-d8b554208eb2-kube-api-access-4btq4\") pod \"dns-default-594h7\" (UID: \"aa160e32-0794-4750-b720-d8b554208eb2\") " pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.653090 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm2vg\" (UniqueName: \"kubernetes.io/projected/6b7f7288-854e-407f-957d-7c4b0b0e4a46-kube-api-access-xm2vg\") pod \"dns-operator-744455d44c-cxmqc\" (UID: \"6b7f7288-854e-407f-957d-7c4b0b0e4a46\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.662585 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlshq\" (UniqueName: \"kubernetes.io/projected/01338d8d-7937-46ce-889e-5fae3c8152c2-kube-api-access-jlshq\") pod \"migrator-59844c95c7-6cpps\" (UID: \"01338d8d-7937-46ce-889e-5fae3c8152c2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.695500 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.695608 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.195589887 +0000 UTC m=+138.675953273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.696165 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.696340 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.697540 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.197521139 +0000 UTC m=+138.677884525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.704878 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.705174 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5znx\" (UniqueName: \"kubernetes.io/projected/7451eebf-43c1-46a4-bee2-24ce97bc0472-kube-api-access-f5znx\") pod \"kube-storage-version-migrator-operator-b67b599dd-lbdlv\" (UID: \"7451eebf-43c1-46a4-bee2-24ce97bc0472\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.705769 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8pqf\" (UniqueName: \"kubernetes.io/projected/bdbf5600-4c0e-4816-94d5-b4960e3823e6-kube-api-access-j8pqf\") pod \"machine-config-server-2zsl7\" (UID: \"bdbf5600-4c0e-4816-94d5-b4960e3823e6\") " pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.723749 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk5nt\" (UniqueName: \"kubernetes.io/projected/e2a34d4c-ce3b-47ce-929d-c366188eca0a-kube-api-access-jk5nt\") pod \"machine-api-operator-5694c8668f-htgd9\" (UID: \"e2a34d4c-ce3b-47ce-929d-c366188eca0a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.726257 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.733043 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vtfwd"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.738829 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54dt6\" (UniqueName: \"kubernetes.io/projected/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-kube-api-access-54dt6\") pod \"marketplace-operator-79b997595-9rvcj\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.749543 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.756065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzztr\" (UniqueName: \"kubernetes.io/projected/83a5ba89-988b-44ec-b584-d39bf9a8b0ea-kube-api-access-mzztr\") pod \"service-ca-9c57cc56f-pqvqq\" (UID: \"83a5ba89-988b-44ec-b584-d39bf9a8b0ea\") " pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.761968 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.772211 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.773204 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.787974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2q98\" (UniqueName: \"kubernetes.io/projected/dabf56b7-6888-4d13-ad8d-d4d69016d37d-kube-api-access-k2q98\") pod \"multus-admission-controller-857f4d67dd-5z2hc\" (UID: \"dabf56b7-6888-4d13-ad8d-d4d69016d37d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.788917 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.801111 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.801188 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.801262 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.301230701 +0000 UTC m=+138.781594087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.801295 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.801771 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.301762366 +0000 UTC m=+138.782125742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.808191 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.816602 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7564s\" (UniqueName: \"kubernetes.io/projected/4963328d-9499-41bb-be08-0160ebc41b78-kube-api-access-7564s\") pod \"machine-config-controller-84d6567774-tzcmp\" (UID: \"4963328d-9499-41bb-be08-0160ebc41b78\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.820698 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.821360 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.823938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgzp\" (UniqueName: \"kubernetes.io/projected/8dfc137d-f565-4d31-8714-2d2343744701-kube-api-access-xqgzp\") pod \"package-server-manager-789f6589d5-pc68l\" (UID: \"8dfc137d-f565-4d31-8714-2d2343744701\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.831063 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-594h7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.839287 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2zsl7" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.840795 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-r2zpk" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.846876 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfd2\" (UniqueName: \"kubernetes.io/projected/6c3f6594-0f93-41e5-89c4-96cc33522f7f-kube-api-access-drfd2\") pod \"control-plane-machine-set-operator-78cbb6b69f-xpksx\" (UID: \"6c3f6594-0f93-41e5-89c4-96cc33522f7f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.858761 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.888329 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s"] Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.904090 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.904303 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.404265276 +0000 UTC m=+138.884628662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:43 crc kubenswrapper[4982]: I0123 07:57:43.904677 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:43 crc kubenswrapper[4982]: E0123 07:57:43.905246 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.405222851 +0000 UTC m=+138.885586237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.005460 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.006273 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 07:57:44.506237992 +0000 UTC m=+138.986601728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.007438 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.007799 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.507783243 +0000 UTC m=+138.988146629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.011409 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.017938 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.051840 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.104992 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.109278 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.109687 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.609666127 +0000 UTC m=+139.090029513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.110139 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.165539 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269ea902_7c9c_4223_acbe_31b0c705ef37.slice/crio-ff8de0a79ead93e11e71e3073d19b4df0c1aecad6c30670a7ce2143cc05bdaff WatchSource:0}: Error finding container ff8de0a79ead93e11e71e3073d19b4df0c1aecad6c30670a7ce2143cc05bdaff: Status 404 returned error can't find the container with id ff8de0a79ead93e11e71e3073d19b4df0c1aecad6c30670a7ce2143cc05bdaff Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.166843 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d6acb07_307a_4aa2_b316_c00e07ea41d9.slice/crio-6bb57ed7fe0eedf9bcbe6e41e7b250e0fa0723335543dd6f7e2a105f99227dc1 WatchSource:0}: Error finding container 6bb57ed7fe0eedf9bcbe6e41e7b250e0fa0723335543dd6f7e2a105f99227dc1: Status 404 returned error can't find the container with id 6bb57ed7fe0eedf9bcbe6e41e7b250e0fa0723335543dd6f7e2a105f99227dc1 Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.167818 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea0c85a9_22f8_45d0_b22a_140fc3f0bb05.slice/crio-b46df99564bc36ca996960f5ba65467f9471308c119d5cab6ecef406c6afac10 WatchSource:0}: Error finding container b46df99564bc36ca996960f5ba65467f9471308c119d5cab6ecef406c6afac10: Status 404 returned error can't find the container with id b46df99564bc36ca996960f5ba65467f9471308c119d5cab6ecef406c6afac10 Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.211060 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.211477 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.711455578 +0000 UTC m=+139.191818954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.237024 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.296692 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.297256 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.298668 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.302312 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.306676 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.311772 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.312001 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.811964294 +0000 UTC m=+139.292327690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.312360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.313178 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.813160216 +0000 UTC m=+139.293523602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.376769 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261667fa_341d_4eb6_a5db_14064cc71663.slice/crio-35141da412d835831272fb87c957e0bcfd5c852395ff5f352186f1ca222f4baa WatchSource:0}: Error finding container 35141da412d835831272fb87c957e0bcfd5c852395ff5f352186f1ca222f4baa: Status 404 returned error can't find the container with id 35141da412d835831272fb87c957e0bcfd5c852395ff5f352186f1ca222f4baa Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.380255 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.382604 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5qgb4"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.384123 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jzgnr"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.386234 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qdpxs"] Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.404542 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode93a65a5_a2ad_4e6d_8d3d_f07480db50e7.slice/crio-9293cb211adc59fd9128d1a508d821bdfb2f1e5a1b9800c6bb4ea534c2960c23 WatchSource:0}: Error finding container 9293cb211adc59fd9128d1a508d821bdfb2f1e5a1b9800c6bb4ea534c2960c23: Status 404 returned error can't find the container with id 9293cb211adc59fd9128d1a508d821bdfb2f1e5a1b9800c6bb4ea534c2960c23 Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.405322 
4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod474d5614_be8d_4336_ab5d_6b694e219975.slice/crio-c4269325fbd84b5dfa585167a5fe99aa9994b313a87d4d5ae311f094946eb918 WatchSource:0}: Error finding container c4269325fbd84b5dfa585167a5fe99aa9994b313a87d4d5ae311f094946eb918: Status 404 returned error can't find the container with id c4269325fbd84b5dfa585167a5fe99aa9994b313a87d4d5ae311f094946eb918 Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.407019 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd5e5739_906e_4bfa_8ee9_521ddc8d22d8.slice/crio-fe9c66b2e0a06273c17a1a45c35e9ffba4bdc4060f0d14dbc4ad61b95ce34891 WatchSource:0}: Error finding container fe9c66b2e0a06273c17a1a45c35e9ffba4bdc4060f0d14dbc4ad61b95ce34891: Status 404 returned error can't find the container with id fe9c66b2e0a06273c17a1a45c35e9ffba4bdc4060f0d14dbc4ad61b95ce34891 Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.413779 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.414247 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:44.914229088 +0000 UTC m=+139.394592464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.461583 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27587190_e124_4217_94c7_13a44045143d.slice/crio-5c950b23ae54b110a7b54f8e69c3b45d8fbd3ef2d0214c359f887414e4b13b95 WatchSource:0}: Error finding container 5c950b23ae54b110a7b54f8e69c3b45d8fbd3ef2d0214c359f887414e4b13b95: Status 404 returned error can't find the container with id 5c950b23ae54b110a7b54f8e69c3b45d8fbd3ef2d0214c359f887414e4b13b95 Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.469793 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7a6af3_b71b_445f_81e6_4b47252f128d.slice/crio-2d92e7cf43ef3ac59570fab044bcb89a69229b92c1e2d054f94a540f881aa858 WatchSource:0}: Error finding container 2d92e7cf43ef3ac59570fab044bcb89a69229b92c1e2d054f94a540f881aa858: Status 404 returned error can't find the container with id 2d92e7cf43ef3ac59570fab044bcb89a69229b92c1e2d054f94a540f881aa858 Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.516999 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.517791 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.017423957 +0000 UTC m=+139.497787353 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.584521 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-r2zpk"] Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.618308 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.618650 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.118598261 +0000 UTC m=+139.598961647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.621742 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.622088 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.122072705 +0000 UTC m=+139.602436091 (durationBeforeRetry 500ms). 
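Each failed operation above ends with "No retries permitted until <t> (durationBeforeRetry 500ms)": the volume reconciler records the failure time and refuses to start the same operation again until that deadline passes, which is why the identical mount/unmount pair reappears at roughly 100ms intervals but only actually retries after 500ms. A rough sketch of that gate, with hypothetical names (not kubelet's actual types in nestedpendingoperations.go):

```go
package main

import (
	"fmt"
	"time"
)

// retryGate is an illustrative stand-in for the bookkeeping behind
// "No retries permitted until <t> (durationBeforeRetry 500ms)".
type retryGate struct {
	lastError           time.Time
	durationBeforeRetry time.Duration
}

func (g *retryGate) recordFailure(now time.Time) {
	g.lastError = now
	if g.durationBeforeRetry == 0 {
		g.durationBeforeRetry = 500 * time.Millisecond // initial backoff step seen in the log
	}
}

// mayRetry gates the next attempt at lastError + durationBeforeRetry,
// the exact timestamp printed in the log lines above.
func (g *retryGate) mayRetry(now time.Time) bool {
	return !now.Before(g.lastError.Add(g.durationBeforeRetry))
}

func main() {
	g := &retryGate{}
	t0 := time.Date(2026, 1, 23, 7, 57, 44, 7_799_000, time.UTC) // failure at 07:57:44.007799
	g.recordFailure(t0)
	fmt.Println(g.mayRetry(t0.Add(100 * time.Millisecond))) // false: reconciler loops sooner
	fmt.Println(g.mayRetry(t0.Add(500 * time.Millisecond))) // true: 07:57:44.507799 reached
}
```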
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.630944 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qdpxs" event={"ID":"4b63c011-53bb-4dce-8db4-849770a043ba","Type":"ContainerStarted","Data":"516f54b80305fc4767778a21b36735a669380fb5e697c3275999756499355cda"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.638001 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" event={"ID":"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e","Type":"ContainerStarted","Data":"d801644df823e0701aa8982ce672994ed9c338584d654bb21f040af0184ad505"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.638275 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.639594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" event={"ID":"81bf1c62-20c1-4f84-b5ec-7756d496ae64","Type":"ContainerStarted","Data":"29df0db560ea390968d669f5a8fb7d223aebf9861ab733e77e21ff0b8f48bfb8"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.642808 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jzgnr" event={"ID":"22a8f798-f137-4e9b-b1ba-09f8a84179be","Type":"ContainerStarted","Data":"b32c81c48956a1357894f57ecdf9a4f7d023015c9bc9ea572b25ced56287a9a8"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.658818 4982 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nc4fw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.658973 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.659241 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" event={"ID":"2b188236-85a4-405c-9651-ba4b15dc4956","Type":"ContainerStarted","Data":"6e9bcc28ef8eb8e648a78a0070c25824bb20955ccfaa1270abbabc4c44b83d4b"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.659389 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" event={"ID":"2b188236-85a4-405c-9651-ba4b15dc4956","Type":"ContainerStarted","Data":"7ef1ff2b7b9d001629bbc249ea7181de923a3b463a10c04d412dd23cac6f0005"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.660193 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.663042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sw7s7" event={"ID":"c32c238b-fa91-4017-b566-10cece74d21a","Type":"ContainerStarted","Data":"ec94dbf21b34da725562fd46be81511afa19d51d9f3828ac12afdcad78885cd8"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.667167 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" event={"ID":"ee2e39ac-1eb4-4518-83cb-9331803431ca","Type":"ContainerStarted","Data":"b74dd6802a6488c9e75ffcb39d0e1a93b4fe0bea47b8e46a0e6a28c2a96eceda"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.675609 4982 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lrw5k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.675677 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.677632 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" event={"ID":"e4d580f9-7993-4027-804a-0a1dcc4e291c","Type":"ContainerStarted","Data":"b07006288bd0704ddf75b3ed686983cce58c8be105497ce0d5c833e0a3ff40a5"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.677668 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" event={"ID":"e4d580f9-7993-4027-804a-0a1dcc4e291c","Type":"ContainerStarted","Data":"5bd116d2d16d3acdff5381e592eef5c2b1c59dbb7b7f219371880800b22692b0"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.691329 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" event={"ID":"5d6acb07-307a-4aa2-b316-c00e07ea41d9","Type":"ContainerStarted","Data":"6bb57ed7fe0eedf9bcbe6e41e7b250e0fa0723335543dd6f7e2a105f99227dc1"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.695797 4982 generic.go:334] "Generic (PLEG): container finished" podID="302fd8a4-7ced-4585-b5f7-15fa67d0d01e" containerID="8b860818d644d6283e4fccb76dc369dc399afce866d9a2a272037584d15c772e" exitCode=0 Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.696074 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" event={"ID":"302fd8a4-7ced-4585-b5f7-15fa67d0d01e","Type":"ContainerDied","Data":"8b860818d644d6283e4fccb76dc369dc399afce866d9a2a272037584d15c772e"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.698183 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" 
event={"ID":"8db4eb27-d67a-483f-8813-3921956ad64d","Type":"ContainerStarted","Data":"e14a9924dd662fffd4affac8d5564506a1ac86d8b83141fb3a9e4b20e48cc38e"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.709129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" event={"ID":"57bc3220-bc9e-44ac-89a7-67b8996394f0","Type":"ContainerStarted","Data":"030b5506ad8ec6743323446963e039246be5246ca53a7ce6a006865292197d3a"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.718916 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" event={"ID":"27587190-e124-4217-94c7-13a44045143d","Type":"ContainerStarted","Data":"5c950b23ae54b110a7b54f8e69c3b45d8fbd3ef2d0214c359f887414e4b13b95"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.720791 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" event={"ID":"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05","Type":"ContainerStarted","Data":"b46df99564bc36ca996960f5ba65467f9471308c119d5cab6ecef406c6afac10"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.722611 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: W0123 07:57:44.723499 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6073517_08b4_4946_b5e7_21aaaae62de3.slice/crio-51c8e9490cd67dd2027ff8c2b233a9b9f9502a589028f58f4cf48195847193d8 WatchSource:0}: Error finding container 51c8e9490cd67dd2027ff8c2b233a9b9f9502a589028f58f4cf48195847193d8: Status 404 returned error can't find the container with id 51c8e9490cd67dd2027ff8c2b233a9b9f9502a589028f58f4cf48195847193d8 Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.724091 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.224063601 +0000 UTC m=+139.704426987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.724182 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.725547 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.22553133 +0000 UTC m=+139.705894716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.754384 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2zsl7" event={"ID":"bdbf5600-4c0e-4816-94d5-b4960e3823e6","Type":"ContainerStarted","Data":"74383a7bc611e015f44b6aa1ad9bc5004eae1e242e7bf969ede3eec25573c683"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.758456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" event={"ID":"474d5614-be8d-4336-ab5d-6b694e219975","Type":"ContainerStarted","Data":"c4269325fbd84b5dfa585167a5fe99aa9994b313a87d4d5ae311f094946eb918"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.762292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" event={"ID":"0f80f80a-64b1-4eba-afd6-1ed4ec11d026","Type":"ContainerStarted","Data":"a13b7f049255e16615f1553648d066e797f34b2de4117e961e531692095c312a"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.764609 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" event={"ID":"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7","Type":"ContainerStarted","Data":"9293cb211adc59fd9128d1a508d821bdfb2f1e5a1b9800c6bb4ea534c2960c23"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.772087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" event={"ID":"458b850a-fd8f-4b10-b504-76f0db87d7ad","Type":"ContainerStarted","Data":"b2094d2d78ff47dda3f51fd8b2ca5032e4b434ca38d1e3abf5ecc8436d402e44"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.774820 4982 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" event={"ID":"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8","Type":"ContainerStarted","Data":"fe9c66b2e0a06273c17a1a45c35e9ffba4bdc4060f0d14dbc4ad61b95ce34891"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.776281 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" event={"ID":"269ea902-7c9c-4223-acbe-31b0c705ef37","Type":"ContainerStarted","Data":"ff8de0a79ead93e11e71e3073d19b4df0c1aecad6c30670a7ce2143cc05bdaff"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.785094 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" event={"ID":"261667fa-341d-4eb6-a5db-14064cc71663","Type":"ContainerStarted","Data":"35141da412d835831272fb87c957e0bcfd5c852395ff5f352186f1ca222f4baa"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.796823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" event={"ID":"fc403e93-13e4-4220-9685-e3e23dd5ab3a","Type":"ContainerStarted","Data":"e9191d3e167d58bfab484246f89b4128323172fb4636f221b5d29cc399d09956"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.804496 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" event={"ID":"86e87a68-cf70-4b16-8d78-8ea0e3d9bf5c","Type":"ContainerStarted","Data":"7d594efc0d315c02f388827c9eb5efb220bcac3bf044378141ea998c38421a5a"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.808269 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" event={"ID":"352ea8f0-9969-4a48-9b37-c77819f1473a","Type":"ContainerStarted","Data":"d133eddf8d76967008471bf89e16efc13c6a94a8e0ea83d23336c94d9b2b4205"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.822446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" event={"ID":"bd7a6af3-b71b-445f-81e6-4b47252f128d","Type":"ContainerStarted","Data":"2d92e7cf43ef3ac59570fab044bcb89a69229b92c1e2d054f94a540f881aa858"} Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.825987 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.826522 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.326493839 +0000 UTC m=+139.806857225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.826750 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.827924 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.327902367 +0000 UTC m=+139.808265753 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.927689 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.927855 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.427809998 +0000 UTC m=+139.908173384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:44 crc kubenswrapper[4982]: I0123 07:57:44.927987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:44 crc kubenswrapper[4982]: E0123 07:57:44.928996 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.428971139 +0000 UTC m=+139.909334525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.035962 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.036891 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.536825483 +0000 UTC m=+140.017188869 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.142341 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.142775 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.642759805 +0000 UTC m=+140.123123191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.223794 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rvcj"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.233223 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cxmqc"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.245146 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.245794 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.745771829 +0000 UTC m=+140.226135215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.305092 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.336240 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.340162 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-n4294" podStartSLOduration=121.34012736 podStartE2EDuration="2m1.34012736s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:45.3274551 +0000 UTC m=+139.807818506" watchObservedRunningTime="2026-01-23 07:57:45.34012736 +0000 UTC m=+139.820490746" Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.354133 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.354611 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.854596628 +0000 UTC m=+140.334960004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.380856 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-txk5n" podStartSLOduration=120.380836172 podStartE2EDuration="2m0.380836172s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:45.380455122 +0000 UTC m=+139.860818508" watchObservedRunningTime="2026-01-23 07:57:45.380836172 +0000 UTC m=+139.861199558" Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.450848 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-pqvqq"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.456335 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" podStartSLOduration=120.456309407 podStartE2EDuration="2m0.456309407s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:45.453105811 +0000 UTC m=+139.933469187" watchObservedRunningTime="2026-01-23 07:57:45.456309407 +0000 UTC m=+139.936672793" Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.457723 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.458186 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:45.958170947 +0000 UTC m=+140.438534323 (durationBeforeRetry 500ms). 
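The "Observed pod startup duration" records above are straightforward arithmetic: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp. For machine-approver-56656f9798-n4294 that is 07:57:45.34012736 - 07:55:44 = 2m1.34012736s, matching the logged value; a quick check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999"
	created, _ := time.Parse(layout, "2026-01-23 07:55:44")
	running, _ := time.Parse(layout, "2026-01-23 07:57:45.34012736")
	fmt.Println(running.Sub(created)) // 2m1.34012736s, the logged podStartE2EDuration
}
```

The zeroed firstStartedPulling/lastFinishedPulling timestamps indicate no image pull contributed to the window; these pods were waiting out the node restart, not pulling images.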
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.487276 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bnwv8"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.497887 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-5z2hc"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.503177 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" podStartSLOduration=120.503141114 podStartE2EDuration="2m0.503141114s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:45.492786616 +0000 UTC m=+139.973150002" watchObservedRunningTime="2026-01-23 07:57:45.503141114 +0000 UTC m=+139.983504500" Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.559426 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.560142 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.060126233 +0000 UTC m=+140.540489619 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.638217 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.660202 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.670983 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.170926045 +0000 UTC m=+140.651289431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.684246 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 07:57:45 crc kubenswrapper[4982]: W0123 07:57:45.708804 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddabf56b7_6888_4d13_ad8d_d4d69016d37d.slice/crio-21314356618f8950c61cfaec9eb48c514606fd074031906b85d68dc447d700eb WatchSource:0}: Error finding container 21314356618f8950c61cfaec9eb48c514606fd074031906b85d68dc447d700eb: Status 404 returned error can't find the container with id 21314356618f8950c61cfaec9eb48c514606fd074031906b85d68dc447d700eb Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.735342 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-htgd9"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.741292 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk"] Jan 23 07:57:45 crc kubenswrapper[4982]: W0123 07:57:45.742769 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b6912da_3a32_4b32_89bf_ef8dc1a3fff7.slice/crio-a36ee85b06e1aa8893316e213ff0e65e78259b8db638b504764849d1be1f1052 WatchSource:0}: Error finding container a36ee85b06e1aa8893316e213ff0e65e78259b8db638b504764849d1be1f1052: Status 404 returned error can't find the container with id a36ee85b06e1aa8893316e213ff0e65e78259b8db638b504764849d1be1f1052 Jan 23 07:57:45 
crc kubenswrapper[4982]: I0123 07:57:45.779176 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.780266 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.280244118 +0000 UTC m=+140.760607504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.826133 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.881139 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.890120 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.390091316 +0000 UTC m=+140.870454702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.895486 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-594h7"] Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.985377 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" event={"ID":"dabf56b7-6888-4d13-ad8d-d4d69016d37d","Type":"ContainerStarted","Data":"21314356618f8950c61cfaec9eb48c514606fd074031906b85d68dc447d700eb"} Jan 23 07:57:45 crc kubenswrapper[4982]: I0123 07:57:45.991587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:45 crc kubenswrapper[4982]: E0123 07:57:45.992406 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.49238065 +0000 UTC m=+140.972744036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.018939 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" event={"ID":"83a5ba89-988b-44ec-b584-d39bf9a8b0ea","Type":"ContainerStarted","Data":"ffb9874680a1ebd3c54dbaec2c5638548ffad7f2fc053d4ee9bb9abf67e58f9b"} Jan 23 07:57:46 crc kubenswrapper[4982]: W0123 07:57:46.043724 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b3140c2_1f0b_44ac_ac93_b5862802c60b.slice/crio-0b8cd1324012fc29553279548e18dfa0ea0871566f69836f85366ba0de39b450 WatchSource:0}: Error finding container 0b8cd1324012fc29553279548e18dfa0ea0871566f69836f85366ba0de39b450: Status 404 returned error can't find the container with id 0b8cd1324012fc29553279548e18dfa0ea0871566f69836f85366ba0de39b450 Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.053660 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" event={"ID":"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef","Type":"ContainerStarted","Data":"1389c1a34cc2cc63e75fa7b56684a468297614298df7b9c78a3c7830b7b042ab"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.072354 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-vtfwd" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.082605 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp"] Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.087465 4982 patch_prober.go:28] interesting pod/console-operator-58897d9998-vtfwd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.087522 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" podUID="fc403e93-13e4-4220-9685-e3e23dd5ab3a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.092831 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" event={"ID":"01338d8d-7937-46ce-889e-5fae3c8152c2","Type":"ContainerStarted","Data":"9c82878f004b8abb6aecdd6ee9a8d7d9586fc670bf96eb069186486fe1d34d69"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.115214 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.119006 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.618975716 +0000 UTC m=+141.099339102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.119283 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.119711 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.619702885 +0000 UTC m=+141.100066271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.122920 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" event={"ID":"474d5614-be8d-4336-ab5d-6b694e219975","Type":"ContainerStarted","Data":"ff5d2976d988ce5f5bc2dd6709340d81c1b9fa83413020a05c59d028cef789fc"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.192540 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" event={"ID":"458b850a-fd8f-4b10-b504-76f0db87d7ad","Type":"ContainerStarted","Data":"54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.193081 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.213800 4982 csr.go:261] certificate signing request csr-q5xk9 is approved, waiting to be issued Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.217841 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" event={"ID":"7451eebf-43c1-46a4-bee2-24ce97bc0472","Type":"ContainerStarted","Data":"4fc2978f68f8656a9a9de49c430322fcf15d76c8eae7fffbf1404853f1a4ddca"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.221353 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.221640 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l"] Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.223194 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.723172941 +0000 UTC m=+141.203536327 (durationBeforeRetry 500ms). 
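The csr.go lines above show the kubelet's client-certificate rotation completing: csr-q5xk9 is approved and, moments later, issued. A hedged client-go sketch of inspecting that CSR's state, under the same kubeconfig-access assumption as earlier:

```go
package main

import (
	"context"
	"fmt"

	certv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	csr, err := cs.CertificatesV1().CertificateSigningRequests().
		Get(context.TODO(), "csr-q5xk9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range csr.Status.Conditions {
		if cond.Type == certv1.CertificateApproved {
			fmt.Println("csr-q5xk9 is approved") // "approved, waiting to be issued"
		}
	}
	// "Issued" corresponds to the signed certificate being populated.
	fmt.Println("issued:", len(csr.Status.Certificate) > 0)
}
```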
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.226472 4982 csr.go:257] certificate signing request csr-q5xk9 is issued Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.247866 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" event={"ID":"cd5e5739-906e-4bfa-8ee9-521ddc8d22d8","Type":"ContainerStarted","Data":"082c305f5f426eb5deef4d6b633ff9f1bcc6e08c1d21ef313ba329dab8ae5b26"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.249150 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.250507 4982 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q7frt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.250544 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" podUID="cd5e5739-906e-4bfa-8ee9-521ddc8d22d8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.287297 4982 generic.go:334] "Generic (PLEG): container finished" podID="352ea8f0-9969-4a48-9b37-c77819f1473a" containerID="d133eddf8d76967008471bf89e16efc13c6a94a8e0ea83d23336c94d9b2b4205" exitCode=0 Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.287373 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" event={"ID":"352ea8f0-9969-4a48-9b37-c77819f1473a","Type":"ContainerDied","Data":"d133eddf8d76967008471bf89e16efc13c6a94a8e0ea83d23336c94d9b2b4205"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.315430 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sw7s7" event={"ID":"c32c238b-fa91-4017-b566-10cece74d21a","Type":"ContainerStarted","Data":"4489bbc6c5ba3e473f3039dc1923d20ecbcd0c21711969e7e627b091d4092988"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.317867 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" event={"ID":"57bc3220-bc9e-44ac-89a7-67b8996394f0","Type":"ContainerStarted","Data":"8a9ae624d4c30f2a6fabc15dda6513925128e65050605ccd078afb5d4a651a49"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.326669 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.328173 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.828156048 +0000 UTC m=+141.308519424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.350988 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" event={"ID":"6b7f7288-854e-407f-957d-7c4b0b0e4a46","Type":"ContainerStarted","Data":"fd9c9cf5d1d3c71a9954ae4c00c8c8e76ed3ede426f97bf9641f6ec49f7494ff"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.379896 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r2zpk" event={"ID":"c6073517-08b4-4946-b5e7-21aaaae62de3","Type":"ContainerStarted","Data":"51c8e9490cd67dd2027ff8c2b233a9b9f9502a589028f58f4cf48195847193d8"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.389561 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" event={"ID":"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7","Type":"ContainerStarted","Data":"a36ee85b06e1aa8893316e213ff0e65e78259b8db638b504764849d1be1f1052"} Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.405756 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.430289 4982 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hwqwn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.430384 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" podUID="27587190-e124-4217-94c7-13a44045143d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.433181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.437466 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 07:57:46.93742336 +0000 UTC m=+141.417786746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.441213 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.451401 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.452130 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.453814 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:46.953753678 +0000 UTC m=+141.434117064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.568801 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.569084 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.069043511 +0000 UTC m=+141.549406897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.569534 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.569910 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.069895434 +0000 UTC m=+141.550258820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.639262 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" podStartSLOduration=121.639231504 podStartE2EDuration="2m1.639231504s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:46.638450973 +0000 UTC m=+141.118814359" watchObservedRunningTime="2026-01-23 07:57:46.639231504 +0000 UTC m=+141.119594890" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.667479 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" podStartSLOduration=121.667450841 podStartE2EDuration="2m1.667450841s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:46.665422407 +0000 UTC m=+141.145785793" watchObservedRunningTime="2026-01-23 07:57:46.667450841 +0000 UTC m=+141.147814217" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.670374 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.670775 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.17075629 +0000 UTC m=+141.651119676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.704137 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.713703 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 07:57:46 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Jan 23 07:57:46 crc kubenswrapper[4982]: [+]process-running ok Jan 23 07:57:46 crc kubenswrapper[4982]: healthz check failed Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.713764 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.729865 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" podStartSLOduration=121.729846795 podStartE2EDuration="2m1.729846795s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:46.728334475 +0000 UTC m=+141.208697861" watchObservedRunningTime="2026-01-23 07:57:46.729846795 +0000 UTC m=+141.210210181" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.772001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.775009 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.274993727 +0000 UTC m=+141.755357103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.779581 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x9vz" podStartSLOduration=122.77956735 podStartE2EDuration="2m2.77956735s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:46.778981324 +0000 UTC m=+141.259344710" watchObservedRunningTime="2026-01-23 07:57:46.77956735 +0000 UTC m=+141.259930736" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.837025 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sw7s7" podStartSLOduration=121.83698621 podStartE2EDuration="2m1.83698621s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:46.835086709 +0000 UTC m=+141.315450095" watchObservedRunningTime="2026-01-23 07:57:46.83698621 +0000 UTC m=+141.317349596" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.874058 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.874475 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.374458705 +0000 UTC m=+141.854822091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.952930 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" podStartSLOduration=122.95290309 podStartE2EDuration="2m2.95290309s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:46.945793009 +0000 UTC m=+141.426156395" watchObservedRunningTime="2026-01-23 07:57:46.95290309 +0000 UTC m=+141.433266476" Jan 23 07:57:46 crc kubenswrapper[4982]: I0123 07:57:46.975289 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:46 crc kubenswrapper[4982]: E0123 07:57:46.976692 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.476673688 +0000 UTC m=+141.957037074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.015981 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" podStartSLOduration=122.015958632 podStartE2EDuration="2m2.015958632s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:47.001327019 +0000 UTC m=+141.481690415" watchObservedRunningTime="2026-01-23 07:57:47.015958632 +0000 UTC m=+141.496322008" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.076488 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.077529 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.577506683 +0000 UTC m=+142.057870069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.187839 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.188251 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.688236034 +0000 UTC m=+142.168599430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.194286 4982 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vgf6m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.194366 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" podUID="458b850a-fd8f-4b10-b504-76f0db87d7ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.230526 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 07:52:46 +0000 UTC, rotation deadline is 2026-10-26 11:28:36.973326418 +0000 UTC Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.230870 4982 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6627h30m49.74245954s for next certificate rotation Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.291320 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.291861 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.791841294 +0000 UTC m=+142.272204680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.397887 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.398260 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.898245149 +0000 UTC m=+142.378608535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.493898 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-594h7" event={"ID":"aa160e32-0794-4750-b720-d8b554208eb2","Type":"ContainerStarted","Data":"28eda995b0e39e2a77eeadc9684d10664de7f11f05b86043c7b8a4aed752d20b"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.499237 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.499931 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:47.999850195 +0000 UTC m=+142.480213581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.500316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.513746 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.013724597 +0000 UTC m=+142.494087983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.566192 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jzgnr" event={"ID":"22a8f798-f137-4e9b-b1ba-09f8a84179be","Type":"ContainerStarted","Data":"48e316ce5b5198c49e67a699966c5572224a5cb2f1a0a7c068372a37ea155871"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.611330 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" event={"ID":"e2a34d4c-ce3b-47ce-929d-c366188eca0a","Type":"ContainerStarted","Data":"727e97186c6d5fb0dbb1766bc2ac08cc5061a4285e63b0ee7bc92e491f2412c4"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.621912 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.622772 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.122749222 +0000 UTC m=+142.603112608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.654155 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jzgnr" podStartSLOduration=122.654129174 podStartE2EDuration="2m2.654129174s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:47.646598532 +0000 UTC m=+142.126961928" watchObservedRunningTime="2026-01-23 07:57:47.654129174 +0000 UTC m=+142.134492560" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.690223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" event={"ID":"fc403e93-13e4-4220-9685-e3e23dd5ab3a","Type":"ContainerStarted","Data":"a79f7498e9411118e55647ac71681ea819ee35fc4c5986cfc3a833750657aedb"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.708861 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 07:57:47 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Jan 23 07:57:47 crc kubenswrapper[4982]: [+]process-running ok Jan 23 07:57:47 crc kubenswrapper[4982]: healthz check failed Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.708915 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.723333 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.723919 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.223903906 +0000 UTC m=+142.704267292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.724755 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4hr6k" event={"ID":"bd7a6af3-b71b-445f-81e6-4b47252f128d","Type":"ContainerStarted","Data":"8a393880050dfe05c11ae36ebcbbecc4238a06d3cd70aa32c17dd3368c80e2d1"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.752223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2zsl7" event={"ID":"bdbf5600-4c0e-4816-94d5-b4960e3823e6","Type":"ContainerStarted","Data":"7df7cb1d276cf9f14a10795f769cbffea00470d27b0d80efb3f0990de5ac6450"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.808802 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" event={"ID":"8dfc137d-f565-4d31-8714-2d2343744701","Type":"ContainerStarted","Data":"ce5218f9f9272f56698266acac3be9722fd3a0d3dfe3bff13d4faacef7ce9d3b"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.824378 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.824691 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.32466286 +0000 UTC m=+142.805026246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.825434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.827674 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.32765907 +0000 UTC m=+142.808022456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.859342 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" event={"ID":"3d7b53f4-cc2b-4572-afee-60441a232155","Type":"ContainerStarted","Data":"2d26ab11cca23f54664a68d7e65fbe94f1510759c35a3d9cfaa35fbf61a154f8"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.859389 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" event={"ID":"3d7b53f4-cc2b-4572-afee-60441a232155","Type":"ContainerStarted","Data":"5f6f54b0ef21d64901db55ed75b76424cab138cef1d8c955d82e4473b28de3a6"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.889705 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" event={"ID":"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05","Type":"ContainerStarted","Data":"a22835fec1d683d9b0212f07ee273df4edf634946695870a61e7aef72fb90b03"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.891052 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" event={"ID":"474d5614-be8d-4336-ab5d-6b694e219975","Type":"ContainerStarted","Data":"2f53c7e6be45d41ae96f32c8133ddf3637273602cbede37109c0d5f64baab23c"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.898104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" event={"ID":"e93a65a5-a2ad-4e6d-8d3d-f07480db50e7","Type":"ContainerStarted","Data":"5dd15e75d6aa42738c7457671f4b55eca1c8030c9c9a6490cd4c5da1dbaf897d"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.916556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" event={"ID":"2b3140c2-1f0b-44ac-ac93-b5862802c60b","Type":"ContainerStarted","Data":"0b8cd1324012fc29553279548e18dfa0ea0871566f69836f85366ba0de39b450"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.917252 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.918312 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2zsl7" podStartSLOduration=7.918301752 podStartE2EDuration="7.918301752s" podCreationTimestamp="2026-01-23 07:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:47.79782883 +0000 UTC m=+142.278192226" watchObservedRunningTime="2026-01-23 07:57:47.918301752 +0000 UTC m=+142.398665138" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.920361 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" 
podStartSLOduration=122.920355537 podStartE2EDuration="2m2.920355537s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:47.917193342 +0000 UTC m=+142.397556728" watchObservedRunningTime="2026-01-23 07:57:47.920355537 +0000 UTC m=+142.400718923" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.926213 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.925081 4982 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tjc5z container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.926535 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" podUID="2b3140c2-1f0b-44ac-ac93-b5862802c60b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 23 07:57:47 crc kubenswrapper[4982]: E0123 07:57:47.927094 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.427062787 +0000 UTC m=+142.907426173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.940460 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" event={"ID":"81bf1c62-20c1-4f84-b5ec-7756d496ae64","Type":"ContainerStarted","Data":"95f50df2745c35115c145ba4c8081d4f2b46b4a7fd936031e8b5c1d280c6d86c"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.959917 4982 generic.go:334] "Generic (PLEG): container finished" podID="8db4eb27-d67a-483f-8813-3921956ad64d" containerID="96f632571a55ddf7e8f275e487ad06063ef348d1ad73ac760023fd96043d40e7" exitCode=0 Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.960002 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" event={"ID":"8db4eb27-d67a-483f-8813-3921956ad64d","Type":"ContainerDied","Data":"96f632571a55ddf7e8f275e487ad06063ef348d1ad73ac760023fd96043d40e7"} Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.977496 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" podStartSLOduration=122.97747919 podStartE2EDuration="2m2.97747919s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:47.975764374 +0000 UTC m=+142.456127760" watchObservedRunningTime="2026-01-23 07:57:47.97747919 +0000 UTC m=+142.457842576" Jan 23 07:57:47 crc kubenswrapper[4982]: I0123 07:57:47.979463 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" event={"ID":"261667fa-341d-4eb6-a5db-14064cc71663","Type":"ContainerStarted","Data":"0ef2a58034c918393acc1a092073da1ddc0c3cb7e178a8a51fa3fb36ace840ec"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.032905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.034417 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.534398127 +0000 UTC m=+143.014761513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.038920 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" event={"ID":"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef","Type":"ContainerStarted","Data":"420d82d16393b96d366944fe37455fbf59201e2a61e974b72fbfc3a11c32dded"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.040773 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.048806 4982 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9rvcj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.048881 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" podUID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.058188 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gn9gk" podStartSLOduration=123.058162354 podStartE2EDuration="2m3.058162354s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.050375265 +0000 UTC m=+142.530738852" watchObservedRunningTime="2026-01-23 07:57:48.058162354 +0000 UTC m=+142.538525750" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.121537 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gjcvb" podStartSLOduration=123.121521964 podStartE2EDuration="2m3.121521964s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.11911165 +0000 UTC m=+142.599475036" watchObservedRunningTime="2026-01-23 07:57:48.121521964 +0000 UTC m=+142.601885350" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.143364 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.144743 4982 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.644726177 +0000 UTC m=+143.125089563 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.163505 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" event={"ID":"6c3f6594-0f93-41e5-89c4-96cc33522f7f","Type":"ContainerStarted","Data":"fc37d75164327ac74088a5d3d191b11cda04c9921c50cdeddd2ea4f980a74488"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.192730 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" podStartSLOduration=123.192698444 podStartE2EDuration="2m3.192698444s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.188796339 +0000 UTC m=+142.669159725" watchObservedRunningTime="2026-01-23 07:57:48.192698444 +0000 UTC m=+142.673061830" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.212830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-r2zpk" event={"ID":"c6073517-08b4-4946-b5e7-21aaaae62de3","Type":"ContainerStarted","Data":"6f4821b10f0c74cd04db07e27dff0a5a93efe6999d4ad2997f540a151e1dc6b3"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.249500 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.250763 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.750746371 +0000 UTC m=+143.231109757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.255464 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qdpxs" event={"ID":"4b63c011-53bb-4dce-8db4-849770a043ba","Type":"ContainerStarted","Data":"80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.288686 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7j6f"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.289740 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.303582 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.333282 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" event={"ID":"5d6acb07-307a-4aa2-b316-c00e07ea41d9","Type":"ContainerStarted","Data":"cfca9d2790ade94189f35dcd775e401369b67d9c15db93fdd31aa11ef5086cee"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.356736 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.357144 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-utilities\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.357227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf5k7\" (UniqueName: \"kubernetes.io/projected/5f5ba8a5-d454-48ef-936b-be7618f02695-kube-api-access-xf5k7\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.357265 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-catalog-content\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.384054 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.884007827 +0000 UTC m=+143.364371203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.397316 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7j6f"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.413029 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" podStartSLOduration=123.412997285 podStartE2EDuration="2m3.412997285s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.34835133 +0000 UTC m=+142.828714716" watchObservedRunningTime="2026-01-23 07:57:48.412997285 +0000 UTC m=+142.893360671" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.467180 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-utilities\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.467271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf5k7\" (UniqueName: \"kubernetes.io/projected/5f5ba8a5-d454-48ef-936b-be7618f02695-kube-api-access-xf5k7\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.467306 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-catalog-content\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.467402 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.468459 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:48.968416432 +0000 UTC m=+143.448779818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.469043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-catalog-content\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.479527 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-utilities\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.490424 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" event={"ID":"83a5ba89-988b-44ec-b584-d39bf9a8b0ea","Type":"ContainerStarted","Data":"21eb3043f83616ef0e87e1ca7a5c2a56aec8d9395fef0348b90c81b9d4eb9912"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.515833 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njdcr" podStartSLOduration=123.515814433 podStartE2EDuration="2m3.515814433s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.470778815 +0000 UTC m=+142.951142211" watchObservedRunningTime="2026-01-23 07:57:48.515814433 +0000 UTC m=+142.996177819" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.519914 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hljdw"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.535558 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf5k7\" (UniqueName: \"kubernetes.io/projected/5f5ba8a5-d454-48ef-936b-be7618f02695-kube-api-access-xf5k7\") pod \"community-operators-q7j6f\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.541012 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.546799 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.555025 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hljdw"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.564574 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" podStartSLOduration=123.564548741 podStartE2EDuration="2m3.564548741s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.519701618 +0000 UTC m=+143.000065024" watchObservedRunningTime="2026-01-23 07:57:48.564548741 +0000 UTC m=+143.044912127" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.571769 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.571858 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.071834616 +0000 UTC m=+143.552198002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.582449 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.582919 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.082903243 +0000 UTC m=+143.563266629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.585160 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5qgb4" podStartSLOduration=123.585132923 podStartE2EDuration="2m3.585132923s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.558373615 +0000 UTC m=+143.038737001" watchObservedRunningTime="2026-01-23 07:57:48.585132923 +0000 UTC m=+143.065496309" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.634752 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" event={"ID":"4963328d-9499-41bb-be08-0160ebc41b78","Type":"ContainerStarted","Data":"22450cb7db2127009873600b178a4beeb4a537c7253351fe6cfccb41498b21af"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.661273 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" event={"ID":"27587190-e124-4217-94c7-13a44045143d","Type":"ContainerStarted","Data":"5bfe8be9141f0897adda6da9036c6f726d54f43036fedf031d3e1e4f97ad69c3"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.663261 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xr2l"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.664557 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.683379 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.683828 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-utilities\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.683899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-catalog-content\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.683968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w7kc\" (UniqueName: \"kubernetes.io/projected/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-kube-api-access-5w7kc\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.684958 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.184939281 +0000 UTC m=+143.665302667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.687126 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-r2zpk" podStartSLOduration=8.687114479 podStartE2EDuration="8.687114479s" podCreationTimestamp="2026-01-23 07:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.682603898 +0000 UTC m=+143.162967284" watchObservedRunningTime="2026-01-23 07:57:48.687114479 +0000 UTC m=+143.167477865" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.687856 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xr2l"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.693968 4982 patch_prober.go:28] interesting pod/console-operator-58897d9998-vtfwd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.694058 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vtfwd" podUID="fc403e93-13e4-4220-9685-e3e23dd5ab3a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.694539 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hwqwn" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.707943 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 07:57:48 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Jan 23 07:57:48 crc kubenswrapper[4982]: [+]process-running ok Jan 23 07:57:48 crc kubenswrapper[4982]: healthz check failed Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.708496 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.709220 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.719922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" event={"ID":"7451eebf-43c1-46a4-bee2-24ce97bc0472","Type":"ContainerStarted","Data":"0323e4a9ab943e30cb8a546c2ae2bb50a2c4654a720103a39471669f4f454a78"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.740978 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" event={"ID":"01338d8d-7937-46ce-889e-5fae3c8152c2","Type":"ContainerStarted","Data":"b946456238e30f63f7425b36d2513773902d8ec2d38cc6924c9e222f65a82b8c"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.758268 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" event={"ID":"352ea8f0-9969-4a48-9b37-c77819f1473a","Type":"ContainerStarted","Data":"4383e426007c6fee2737fa76c5e570ecfb7261bb210bd89a1cfa2d38a93b5a46"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.759097 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788722 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w7kc\" (UniqueName: \"kubernetes.io/projected/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-kube-api-access-5w7kc\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788823 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788864 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-utilities\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-catalog-content\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788926 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-utilities\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-catalog-content\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.788986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5ht\" (UniqueName: \"kubernetes.io/projected/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-kube-api-access-cw5ht\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.789822 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-utilities\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.790957 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-catalog-content\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.791172 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.291154561 +0000 UTC m=+143.771517947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.832561 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" event={"ID":"0f80f80a-64b1-4eba-afd6-1ed4ec11d026","Type":"ContainerStarted","Data":"29ed22b6ccaf8ddba7a09b7a57ebd43e9dfb4e488b9f8dff94d3458ca18715fe"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.851799 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9pqmw"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.851937 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qdpxs" podStartSLOduration=123.851912431 podStartE2EDuration="2m3.851912431s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.833493017 +0000 UTC m=+143.313856403" watchObservedRunningTime="2026-01-23 07:57:48.851912431 +0000 UTC m=+143.332275807" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.853119 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.862486 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pqmw"] Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.871606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w7kc\" (UniqueName: \"kubernetes.io/projected/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-kube-api-access-5w7kc\") pod \"certified-operators-hljdw\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.889374 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-pqvqq" podStartSLOduration=123.889358936 podStartE2EDuration="2m3.889358936s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.887615499 +0000 UTC m=+143.367978885" watchObservedRunningTime="2026-01-23 07:57:48.889358936 +0000 UTC m=+143.369722322" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.890492 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.891102 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-catalog-content\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.891171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-utilities\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.891228 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw5ht\" (UniqueName: \"kubernetes.io/projected/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-kube-api-access-cw5ht\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.892548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-catalog-content\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: E0123 07:57:48.892689 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 07:57:49.392665814 +0000 UTC m=+143.873029200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.900643 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" event={"ID":"302fd8a4-7ced-4585-b5f7-15fa67d0d01e","Type":"ContainerStarted","Data":"6afac56e4912b9a8bb1a81af4140e7ffca75e4608e62f1968261b3deac5dd103"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.904043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-utilities\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.941436 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-svcz2" podStartSLOduration=124.941413072 podStartE2EDuration="2m4.941413072s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:48.940678042 +0000 UTC m=+143.421041428" watchObservedRunningTime="2026-01-23 07:57:48.941413072 +0000 UTC m=+143.421776448" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.944951 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw5ht\" (UniqueName: \"kubernetes.io/projected/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-kube-api-access-cw5ht\") pod \"community-operators-8xr2l\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.956078 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" event={"ID":"269ea902-7c9c-4223-acbe-31b0c705ef37","Type":"ContainerStarted","Data":"ec08ef5d2053d6a99e7921753d8aa9b3e0c77dd939ae307a2a4cf08097f3e2bf"} Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.968277 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.994164 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-catalog-content\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.994224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2vjw\" (UniqueName: \"kubernetes.io/projected/19faede0-5207-4753-846b-4e4975ebc03a-kube-api-access-g2vjw\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.994318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:48 crc kubenswrapper[4982]: I0123 07:57:48.994390 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-utilities\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.002397 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.502375218 +0000 UTC m=+143.982738604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.032742 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" event={"ID":"ee2e39ac-1eb4-4518-83cb-9331803431ca","Type":"ContainerStarted","Data":"55ba047f533b117102e0293860ee90bf5d59e610fced51ccb15ff80073faaf8f"} Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.055561 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.056382 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q7frt" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.080013 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.083409 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" podStartSLOduration=125.083383261 podStartE2EDuration="2m5.083383261s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.016954869 +0000 UTC m=+143.497318255" watchObservedRunningTime="2026-01-23 07:57:49.083383261 +0000 UTC m=+143.563746647" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.102319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.102578 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-catalog-content\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.102645 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2vjw\" (UniqueName: \"kubernetes.io/projected/19faede0-5207-4753-846b-4e4975ebc03a-kube-api-access-g2vjw\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.103192 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-utilities\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.104477 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-catalog-content\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.104911 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.604862858 +0000 UTC m=+144.085226244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.107068 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-utilities\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.138825 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2vjw\" (UniqueName: \"kubernetes.io/projected/19faede0-5207-4753-846b-4e4975ebc03a-kube-api-access-g2vjw\") pod \"certified-operators-9pqmw\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.234869 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" podStartSLOduration=124.234839465 podStartE2EDuration="2m4.234839465s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.084886132 +0000 UTC m=+143.565249518" watchObservedRunningTime="2026-01-23 07:57:49.234839465 +0000 UTC m=+143.715202861" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.274522 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.274955 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.774940291 +0000 UTC m=+144.255303677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.311224 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.384377 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-lbdlv" podStartSLOduration=124.384353196 podStartE2EDuration="2m4.384353196s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.265438036 +0000 UTC m=+143.745801422" watchObservedRunningTime="2026-01-23 07:57:49.384353196 +0000 UTC m=+143.864716582" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.432647 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.433249 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:49.933221258 +0000 UTC m=+144.413584644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.540150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.540868 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.040855145 +0000 UTC m=+144.521218531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.617781 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" podStartSLOduration=125.617748778 podStartE2EDuration="2m5.617748778s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.593672382 +0000 UTC m=+144.074035778" watchObservedRunningTime="2026-01-23 07:57:49.617748778 +0000 UTC m=+144.098112164" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.638541 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" podStartSLOduration=125.638515666 podStartE2EDuration="2m5.638515666s" podCreationTimestamp="2026-01-23 07:55:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.625982109 +0000 UTC m=+144.106345515" watchObservedRunningTime="2026-01-23 07:57:49.638515666 +0000 UTC m=+144.118879052" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.641538 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.641984 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.141964598 +0000 UTC m=+144.622327984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.665377 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7j6f"] Jan 23 07:57:49 crc kubenswrapper[4982]: W0123 07:57:49.667515 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f5ba8a5_d454_48ef_936b_be7618f02695.slice/crio-59a08d4c1f3963628b5f164c56577c3be8ad33cb03a9118390ce4577dea32c6c WatchSource:0}: Error finding container 59a08d4c1f3963628b5f164c56577c3be8ad33cb03a9118390ce4577dea32c6c: Status 404 returned error can't find the container with id 59a08d4c1f3963628b5f164c56577c3be8ad33cb03a9118390ce4577dea32c6c Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.724025 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 07:57:49 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Jan 23 07:57:49 crc kubenswrapper[4982]: [+]process-running ok Jan 23 07:57:49 crc kubenswrapper[4982]: healthz check failed Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.724111 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.751048 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.751968 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.251950258 +0000 UTC m=+144.732313644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.798554 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8b4g2" podStartSLOduration=124.798529398 podStartE2EDuration="2m4.798529398s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.785781596 +0000 UTC m=+144.266144982" watchObservedRunningTime="2026-01-23 07:57:49.798529398 +0000 UTC m=+144.278892784" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.857098 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.858085 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.358063685 +0000 UTC m=+144.838427061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.858829 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9tg74" podStartSLOduration=124.858810085 podStartE2EDuration="2m4.858810085s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:49.858487936 +0000 UTC m=+144.338851312" watchObservedRunningTime="2026-01-23 07:57:49.858810085 +0000 UTC m=+144.339173471" Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.928429 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hljdw"] Jan 23 07:57:49 crc kubenswrapper[4982]: I0123 07:57:49.961233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:49 crc kubenswrapper[4982]: E0123 07:57:49.961738 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.461718626 +0000 UTC m=+144.942082012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.068542 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.069498 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.569477927 +0000 UTC m=+145.049841313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.098718 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z" event={"ID":"2b3140c2-1f0b-44ac-ac93-b5862802c60b","Type":"ContainerStarted","Data":"8838c2a46b804c4a6034f1f753257c15b057f89d4a55c83fe0673cb6c57410d1"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.151509 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" event={"ID":"e2a34d4c-ce3b-47ce-929d-c366188eca0a","Type":"ContainerStarted","Data":"186080d1ed4153e5c430b36716c150c31803d585f945b8d0be47d2c4eaff445f"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.151575 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" event={"ID":"e2a34d4c-ce3b-47ce-929d-c366188eca0a","Type":"ContainerStarted","Data":"cc1224d93877aa6e538ba2537b457e18cbae71aae1bb666a4e2a4e38f721862d"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.172137 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerStarted","Data":"59a08d4c1f3963628b5f164c56577c3be8ad33cb03a9118390ce4577dea32c6c"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.173845 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.175122 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.675096171 +0000 UTC m=+145.155459557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.199554 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" event={"ID":"4963328d-9499-41bb-be08-0160ebc41b78","Type":"ContainerStarted","Data":"1ff335f07b44806b8d29da20d18c6c7f63d5013cb129741a642967f0ba88e2d3"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.199643 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" event={"ID":"4963328d-9499-41bb-be08-0160ebc41b78","Type":"ContainerStarted","Data":"db40c81f44fabd5357888243c270d1d59c3797c0b36cb755725fc98d75692950"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.222058 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" event={"ID":"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7","Type":"ContainerStarted","Data":"c3a97975fe617e4f4b7a6ce345a131c571bdd3f6f77e7995bb36a92f057343ab"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.223123 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tjc5z"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.227642 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-htgd9" podStartSLOduration=125.22760024 podStartE2EDuration="2m5.22760024s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.219571704 +0000 UTC m=+144.699935090" watchObservedRunningTime="2026-01-23 07:57:50.22760024 +0000 UTC m=+144.707963626"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.261306 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" event={"ID":"8db4eb27-d67a-483f-8813-3921956ad64d","Type":"ContainerStarted","Data":"597343beba38e294c1ba92bde106afc39b3ede3730f2a8566b3a358fec9563f2"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.275864 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.276029 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xr2l"]
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.276244 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.776222434 +0000 UTC m=+145.256585820 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.301007 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xpksx" event={"ID":"6c3f6594-0f93-41e5-89c4-96cc33522f7f","Type":"ContainerStarted","Data":"d2f343505b3e257a18e7dd610fd0111f266e4a100f45793046deb7a52d85c757"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.356410 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6cpps" event={"ID":"01338d8d-7937-46ce-889e-5fae3c8152c2","Type":"ContainerStarted","Data":"be506ab59a1f2a5fec8c116fbb3016ef12742e2bdfc7754c0d71a84424ede16e"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.375327 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tzcmp" podStartSLOduration=125.375306593 podStartE2EDuration="2m5.375306593s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.296271752 +0000 UTC m=+144.776635138" watchObservedRunningTime="2026-01-23 07:57:50.375306593 +0000 UTC m=+144.855669979"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.389372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.391803 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.891782755 +0000 UTC m=+145.372146131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.444151 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" podStartSLOduration=125.444129469 podStartE2EDuration="2m5.444129469s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.375805946 +0000 UTC m=+144.856169332" watchObservedRunningTime="2026-01-23 07:57:50.444129469 +0000 UTC m=+144.924492865"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.460618 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-79l5k"]
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.462002 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.475060 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" event={"ID":"6b7f7288-854e-407f-957d-7c4b0b0e4a46","Type":"ContainerStarted","Data":"b05fd8eba1140cd4c747c363354ab75c66cd063c9a1831f66d6d239a424f7143"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.475131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" event={"ID":"6b7f7288-854e-407f-957d-7c4b0b0e4a46","Type":"ContainerStarted","Data":"03d705d8f7e2c9d5fc809f7e1d31f2d5f6343f40bb7ffb35ac5a0844e6b11306"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.475447 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.478663 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79l5k"]
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.497595 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.499346 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:50.99932627 +0000 UTC m=+145.479689646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.574096 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerStarted","Data":"52f02ccc7659a6d4010d0108797dad3d1950ee4ae5801dd7d7ea90bf740b7acd"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.597382 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cxmqc" podStartSLOduration=125.59734743 podStartE2EDuration="2m5.59734743s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.588841642 +0000 UTC m=+145.069205028" watchObservedRunningTime="2026-01-23 07:57:50.59734743 +0000 UTC m=+145.077710816"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.599591 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vpzr\" (UniqueName: \"kubernetes.io/projected/df2d9461-97c9-4460-84bf-538f6cd8a5ae-kube-api-access-8vpzr\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.599696 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.599737 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-utilities\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.599802 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-catalog-content\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.600288 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.100270018 +0000 UTC m=+145.580633404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.604512 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" event={"ID":"8dfc137d-f565-4d31-8714-2d2343744701","Type":"ContainerStarted","Data":"d26f220eb976965e5ccaf52855be20146ad9f11833dd2ff13e4761018779af35"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.604577 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" event={"ID":"8dfc137d-f565-4d31-8714-2d2343744701","Type":"ContainerStarted","Data":"1c50b24ef2c24c5b3dd212a3aa9aceb924128a59e8769fe7dc11b4c8379193db"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.610337 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.661233 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9pqmw"]
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.662201 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" podStartSLOduration=125.66218873 podStartE2EDuration="2m5.66218873s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.661344867 +0000 UTC m=+145.141708253" watchObservedRunningTime="2026-01-23 07:57:50.66218873 +0000 UTC m=+145.142552116"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.689204 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7dddz" event={"ID":"0f80f80a-64b1-4eba-afd6-1ed4ec11d026","Type":"ContainerStarted","Data":"ddc974e686c131cdcf98453c763cea4b397ed2571ecb29472762d099e9f0ec84"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.707468 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.708175 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.208155243 +0000 UTC m=+145.688518629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.708699 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vpzr\" (UniqueName: \"kubernetes.io/projected/df2d9461-97c9-4460-84bf-538f6cd8a5ae-kube-api-access-8vpzr\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.708874 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.709273 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-utilities\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.709412 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-catalog-content\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.709892 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.209883689 +0000 UTC m=+145.690247075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.711663 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-utilities\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.712014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-catalog-content\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.719717 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 07:57:50 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld
Jan 23 07:57:50 crc kubenswrapper[4982]: [+]process-running ok
Jan 23 07:57:50 crc kubenswrapper[4982]: healthz check failed
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.719756 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.730234 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-594h7" event={"ID":"aa160e32-0794-4750-b720-d8b554208eb2","Type":"ContainerStarted","Data":"263a739452ae4dd8aad4b77b5852326c70cc69cdfd02e192f394917fd4c50389"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.730293 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-594h7" event={"ID":"aa160e32-0794-4750-b720-d8b554208eb2","Type":"ContainerStarted","Data":"afba70e3fb7f59941925e3240fb92dc4255e8758ec4db8fbc4bb76f1106067f6"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.731117 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-594h7"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.754337 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" event={"ID":"302fd8a4-7ced-4585-b5f7-15fa67d0d01e","Type":"ContainerStarted","Data":"ed2e7967bac61dc95bb90b32153371d528e5820dc57acb9f39489e1f49c0e86a"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.759392 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-594h7" podStartSLOduration=10.759370587 podStartE2EDuration="10.759370587s" podCreationTimestamp="2026-01-23 07:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.755887934 +0000 UTC m=+145.236251330" watchObservedRunningTime="2026-01-23 07:57:50.759370587 +0000 UTC m=+145.239733973"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.770280 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vpzr\" (UniqueName: \"kubernetes.io/projected/df2d9461-97c9-4460-84bf-538f6cd8a5ae-kube-api-access-8vpzr\") pod \"redhat-marketplace-79l5k\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.802961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-cw57s" event={"ID":"ea0c85a9-22f8-45d0-b22a-140fc3f0bb05","Type":"ContainerStarted","Data":"d55fd267b716cd6df0c360fa28b12ae6d65929f0ac7665dd8db229649b4488d6"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.810685 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.811583 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.311550847 +0000 UTC m=+145.791914233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.811927 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.812299 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.312290567 +0000 UTC m=+145.792653943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.822040 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79l5k"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.841911 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" event={"ID":"dabf56b7-6888-4d13-ad8d-d4d69016d37d","Type":"ContainerStarted","Data":"f72a088f2d5453b30ee95f4b4bc4f1ea4d798e8e975ced1ad1b6986783883442"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.842015 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2k6hw"]
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.850475 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" event={"ID":"dabf56b7-6888-4d13-ad8d-d4d69016d37d","Type":"ContainerStarted","Data":"ed40ccaaef724f9dd9538f8a051c108f7f2b184a5f26d676888ee5740bcdbcea"}
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.850554 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jzgnr"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.850591 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.850679 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.852376 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vtfwd"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.860772 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-jzgnr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.860836 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jzgnr" podUID="22a8f798-f137-4e9b-b1ba-09f8a84179be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.870056 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.881373 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2k6hw"]
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.901719 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-5z2hc" podStartSLOduration=125.901692326 podStartE2EDuration="2m5.901692326s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:50.895259583 +0000 UTC m=+145.375622969" watchObservedRunningTime="2026-01-23 07:57:50.901692326 +0000 UTC m=+145.382055732"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.931228 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.932520 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-catalog-content\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.932692 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdqk\" (UniqueName: \"kubernetes.io/projected/0dc556e8-7362-4cba-8d17-bf824323503b-kube-api-access-mfdqk\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:50 crc kubenswrapper[4982]: I0123 07:57:50.932982 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-utilities\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:50 crc kubenswrapper[4982]: E0123 07:57:50.933821 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.433788267 +0000 UTC m=+145.914151653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.038126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-catalog-content\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.038713 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdqk\" (UniqueName: \"kubernetes.io/projected/0dc556e8-7362-4cba-8d17-bf824323503b-kube-api-access-mfdqk\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.038788 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-utilities\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.038869 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.039320 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.539304018 +0000 UTC m=+146.019667414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.040014 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-catalog-content\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.040596 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-utilities\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.084127 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdqk\" (UniqueName: \"kubernetes.io/projected/0dc556e8-7362-4cba-8d17-bf824323503b-kube-api-access-mfdqk\") pod \"redhat-marketplace-2k6hw\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.143609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.144085 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.644065359 +0000 UTC m=+146.124428745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.206244 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2k6hw"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.246022 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.246644 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.74659709 +0000 UTC m=+146.226960676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.248677 4982 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.348066 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.348492 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.848472753 +0000 UTC m=+146.328836129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.423732 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55vpx"]
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.430983 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.439409 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.452463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55vpx"]
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.455071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6k6\" (UniqueName: \"kubernetes.io/projected/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-kube-api-access-lh6k6\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.460214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-catalog-content\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.460335 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-utilities\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.460564 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.461222 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:51.961204837 +0000 UTC m=+146.441568223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.532667 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-79l5k"]
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.562296 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.562605 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6k6\" (UniqueName: \"kubernetes.io/projected/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-kube-api-access-lh6k6\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.562688 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-catalog-content\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.562742 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-utilities\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.562786 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.062745852 +0000 UTC m=+146.543109238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.563053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.563222 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-utilities\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.563448 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.06344096 +0000 UTC m=+146.543804346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.563874 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-catalog-content\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.575918 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2k6hw"]
Jan 23 07:57:51 crc kubenswrapper[4982]: W0123 07:57:51.581194 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dc556e8_7362_4cba_8d17_bf824323503b.slice/crio-4649c62a1e7cb603d5fa84a04a1dd6df0fc197dca80913441da50e1571b7e6cd WatchSource:0}: Error finding container 4649c62a1e7cb603d5fa84a04a1dd6df0fc197dca80913441da50e1571b7e6cd: Status 404 returned error can't find the container with id 4649c62a1e7cb603d5fa84a04a1dd6df0fc197dca80913441da50e1571b7e6cd
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.583426 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6k6\" (UniqueName: \"kubernetes.io/projected/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-kube-api-access-lh6k6\") pod \"redhat-operators-55vpx\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.664748 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.665918 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.165895499 +0000 UTC m=+146.646258885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.702290 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 07:57:51 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld
Jan 23 07:57:51 crc kubenswrapper[4982]: [+]process-running ok
Jan 23 07:57:51 crc kubenswrapper[4982]: healthz check failed
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.702349 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.756688 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55vpx"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.767376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.767873 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.267852575 +0000 UTC m=+146.748215961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.813974 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wg2vk"]
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.815521 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.825792 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg2vk"]
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.864081 4982 generic.go:334] "Generic (PLEG): container finished" podID="0dc556e8-7362-4cba-8d17-bf824323503b" containerID="e47cd81cb1a62a6a099b44fefcbbcf1d2942058087a881ac6e234e037e31986e" exitCode=0
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.864738 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2k6hw" event={"ID":"0dc556e8-7362-4cba-8d17-bf824323503b","Type":"ContainerDied","Data":"e47cd81cb1a62a6a099b44fefcbbcf1d2942058087a881ac6e234e037e31986e"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.864810 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2k6hw" event={"ID":"0dc556e8-7362-4cba-8d17-bf824323503b","Type":"ContainerStarted","Data":"4649c62a1e7cb603d5fa84a04a1dd6df0fc197dca80913441da50e1571b7e6cd"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.866781 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868227 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868515 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-utilities\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868634 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868661 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzww\" (UniqueName: \"kubernetes.io/projected/b6b04d65-1eea-4372-869c-30f41b5afb4c-kube-api-access-stzww\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868693 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868717 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868752 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.868773 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-catalog-content\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.868868 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.368851635 +0000 UTC m=+146.849215021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.870086 4982 generic.go:334] "Generic (PLEG): container finished" podID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerID="4f66ff2d217c61259651997d9f3b715fe21b5700cf4f6225ce40d4c96ef6f11d" exitCode=0
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.870149 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerDied","Data":"4f66ff2d217c61259651997d9f3b715fe21b5700cf4f6225ce40d4c96ef6f11d"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.870179 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerStarted","Data":"3223a861fe76c3c7e48f427b77ca7b4524cff11914ad5a961c01b5fc660f6894"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.877391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.883079 4982 generic.go:334] "Generic (PLEG): container finished" podID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerID="1c3f1366e6c8352d9f8d9353536b3c0488e738ef5cf3ffa56cecedfea262c1f7" exitCode=0
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.883197 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerDied","Data":"1c3f1366e6c8352d9f8d9353536b3c0488e738ef5cf3ffa56cecedfea262c1f7"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.883234 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerStarted","Data":"16f2f4b3ae16445b16682dbf54cd05b5cae847a31a48ab865f14e1dfcf391cf6"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.883354 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.883920 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.885257 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.898770 4982 generic.go:334] "Generic (PLEG): container finished" podID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerID="db50853570d0e3b672d28fba73787e94994caa9ba26a2956ba73ca0ffa00c4a3" exitCode=0
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.898860 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerDied","Data":"db50853570d0e3b672d28fba73787e94994caa9ba26a2956ba73ca0ffa00c4a3"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.915032 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerID="6549c9f06964e72a37ad46511a06bfc1da99e649ebb1c8ae86c3098b33daee59" exitCode=0
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.915212 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerDied","Data":"6549c9f06964e72a37ad46511a06bfc1da99e649ebb1c8ae86c3098b33daee59"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.923080 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" event={"ID":"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7","Type":"ContainerStarted","Data":"ec1a8ce69f27c6b99678a4380ee940b18a1f490568051120eaf4bb247cecfd6f"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.923133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" event={"ID":"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7","Type":"ContainerStarted","Data":"7ca5193a6e3daaa81b92fe1796014df02d65a12e9a976aa562dee0d27b151ba3"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.943211 4982 generic.go:334] "Generic (PLEG): container finished" podID="19faede0-5207-4753-846b-4e4975ebc03a" containerID="00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d" exitCode=0
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.943904 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pqmw" event={"ID":"19faede0-5207-4753-846b-4e4975ebc03a","Type":"ContainerDied","Data":"00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.943970 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pqmw" event={"ID":"19faede0-5207-4753-846b-4e4975ebc03a","Type":"ContainerStarted","Data":"e8932e5bf3aeceeec9a2ac499ff638e9e26c47cfddb3c89e826dbe582dfdf657"}
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.951228 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.951594 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-jzgnr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.951655 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jzgnr" podUID="22a8f798-f137-4e9b-b1ba-09f8a84179be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.964077 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.970673 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.970754 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-catalog-content\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.970837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-utilities\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.972423 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzww\" (UniqueName: \"kubernetes.io/projected/b6b04d65-1eea-4372-869c-30f41b5afb4c-kube-api-access-stzww\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.976712 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-catalog-content\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.977867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-utilities\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk"
Jan 23 07:57:51 crc kubenswrapper[4982]: E0123 07:57:51.979478 4982 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.479459372 +0000 UTC m=+146.959822758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6c2x9" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:51 crc kubenswrapper[4982]: I0123 07:57:51.982415 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.018790 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzww\" (UniqueName: \"kubernetes.io/projected/b6b04d65-1eea-4372-869c-30f41b5afb4c-kube-api-access-stzww\") pod \"redhat-operators-wg2vk\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.074940 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:52 crc kubenswrapper[4982]: E0123 07:57:52.075821 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 07:57:52.575792907 +0000 UTC m=+147.056156293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 07:57:52 crc kubenswrapper[4982]: W0123 07:57:52.110204 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b7ab81_833f_45c4_9e5a_7a1b2615de7e.slice/crio-b34b4e15aa9b66ab8a0a7ef63cfee6465dd117c6d44bb7a87b378f94d54f2c82 WatchSource:0}: Error finding container b34b4e15aa9b66ab8a0a7ef63cfee6465dd117c6d44bb7a87b378f94d54f2c82: Status 404 returned error can't find the container with id b34b4e15aa9b66ab8a0a7ef63cfee6465dd117c6d44bb7a87b378f94d54f2c82 Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.112683 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55vpx"] Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.147437 4982 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T07:57:51.248719076Z","Handler":null,"Name":""} Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.151173 4982 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.151248 4982 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.176518 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.180053 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.180090 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.219541 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.221585 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6c2x9\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.277587 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.308355 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.312357 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:52 crc kubenswrapper[4982]: W0123 07:57:52.409269 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-fd95d3a8527a3d37a5e8dc3eb3357de12d299e911caa6c500f9415533a99621b WatchSource:0}: Error finding container fd95d3a8527a3d37a5e8dc3eb3357de12d299e911caa6c500f9415533a99621b: Status 404 returned error can't find the container with id fd95d3a8527a3d37a5e8dc3eb3357de12d299e911caa6c500f9415533a99621b Jan 23 07:57:52 crc kubenswrapper[4982]: W0123 07:57:52.416935 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-6930b0be3680adf2f33bddd09144a480b29260095f3ba0d606b50affa7c3a1d8 WatchSource:0}: Error finding container 6930b0be3680adf2f33bddd09144a480b29260095f3ba0d606b50affa7c3a1d8: Status 404 returned error can't find the container with id 6930b0be3680adf2f33bddd09144a480b29260095f3ba0d606b50affa7c3a1d8 Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.561188 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wg2vk"] Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.634692 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6c2x9"] Jan 23 07:57:52 crc kubenswrapper[4982]: W0123 07:57:52.663014 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75c9ecd7_a4c2_4c6f_8cda_9c6e3b3b1b9f.slice/crio-fc8fe6bf96b5a8b40dcaddbc9d997ded4e2783fd4ae26017a179bd7b46f9fa78 WatchSource:0}: Error finding container fc8fe6bf96b5a8b40dcaddbc9d997ded4e2783fd4ae26017a179bd7b46f9fa78: Status 404 returned error can't find the container with id 
fc8fe6bf96b5a8b40dcaddbc9d997ded4e2783fd4ae26017a179bd7b46f9fa78 Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.690210 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.691399 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.694168 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.694512 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.708160 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 07:57:52 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Jan 23 07:57:52 crc kubenswrapper[4982]: [+]process-running ok Jan 23 07:57:52 crc kubenswrapper[4982]: healthz check failed Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.708256 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.717278 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.787213 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.787315 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.844376 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.844454 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.866506 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.889314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.889861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.890065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:52 crc kubenswrapper[4982]: I0123 07:57:52.922288 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.010100 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d7b53f4-cc2b-4572-afee-60441a232155" containerID="2d26ab11cca23f54664a68d7e65fbe94f1510759c35a3d9cfaa35fbf61a154f8" exitCode=0 Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.010386 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" event={"ID":"3d7b53f4-cc2b-4572-afee-60441a232155","Type":"ContainerDied","Data":"2d26ab11cca23f54664a68d7e65fbe94f1510759c35a3d9cfaa35fbf61a154f8"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.057565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" event={"ID":"3b6912da-3a32-4b32-89bf-ef8dc1a3fff7","Type":"ContainerStarted","Data":"bf2d602725e52cd231b655b143248bc44d698905e5ff778376f1dae25e501f75"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.088901 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" podStartSLOduration=13.088878538 podStartE2EDuration="13.088878538s" podCreationTimestamp="2026-01-23 07:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:53.087560243 +0000 UTC m=+147.567923629" watchObservedRunningTime="2026-01-23 07:57:53.088878538 +0000 UTC m=+147.569241924" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.102376 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerDied","Data":"08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.102223 4982 generic.go:334] "Generic (PLEG): container finished" podID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerID="08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421" exitCode=0 Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.103086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" 
event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerStarted","Data":"69992218e9f32712fd7c6e9edc9705a0b86b2184a290b384549e70ad5fb1cc9b"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.116405 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bff4d9b1297bc52a67083b91f3c998ea1d2aa355e7e85dff97d340b84bbd7232"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.116459 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6930b0be3680adf2f33bddd09144a480b29260095f3ba0d606b50affa7c3a1d8"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.133433 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b0ab75c26612278ec68762709e5b4ef3e61c3b929676231621cba3f9f34260b6"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.133656 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ddbb60319499977f16e1401a486d64b225b44f182f7226a1181a01321be3550f"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.143238 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9bfb217a7025145721f507b1f3d3c4e0a60528c4592e293f6c1bc3c48845540d"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.143299 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fd95d3a8527a3d37a5e8dc3eb3357de12d299e911caa6c500f9415533a99621b"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.144012 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.149235 4982 generic.go:334] "Generic (PLEG): container finished" podID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerID="8081a691945e158fc6ac3c10b660d937c3cd43aefaff7b8ae89327aa4da6f5fa" exitCode=0 Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.149301 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerDied","Data":"8081a691945e158fc6ac3c10b660d937c3cd43aefaff7b8ae89327aa4da6f5fa"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.149329 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerStarted","Data":"b34b4e15aa9b66ab8a0a7ef63cfee6465dd117c6d44bb7a87b378f94d54f2c82"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.161680 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.171359 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" event={"ID":"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f","Type":"ContainerStarted","Data":"fc8fe6bf96b5a8b40dcaddbc9d997ded4e2783fd4ae26017a179bd7b46f9fa78"} Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.180065 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.191226 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hrg76" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.278231 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" podStartSLOduration=128.278193957 podStartE2EDuration="2m8.278193957s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:53.274041235 +0000 UTC m=+147.754404631" watchObservedRunningTime="2026-01-23 07:57:53.278193957 +0000 UTC m=+147.758557343" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.546635 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.546725 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.554754 4982 patch_prober.go:28] interesting pod/console-f9d7485db-qdpxs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.554834 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qdpxs" podUID="4b63c011-53bb-4dce-8db4-849770a043ba" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.564200 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.564283 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.589336 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-jzgnr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.589572 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.589804 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-jzgnr" 
podUID="22a8f798-f137-4e9b-b1ba-09f8a84179be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.589484 4982 patch_prober.go:28] interesting pod/downloads-7954f5f757-jzgnr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.589890 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jzgnr" podUID="22a8f798-f137-4e9b-b1ba-09f8a84179be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.698921 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.704926 4982 patch_prober.go:28] interesting pod/router-default-5444994796-sw7s7 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 07:57:53 crc kubenswrapper[4982]: [-]has-synced failed: reason withheld Jan 23 07:57:53 crc kubenswrapper[4982]: [+]process-running ok Jan 23 07:57:53 crc kubenswrapper[4982]: healthz check failed Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.705004 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sw7s7" podUID="c32c238b-fa91-4017-b566-10cece74d21a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.822075 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 07:57:53 crc kubenswrapper[4982]: I0123 07:57:53.854757 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.279687 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" event={"ID":"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f","Type":"ContainerStarted","Data":"a3aa3212599f88db990d91f467df6e5ed91b337af71408a781a831576ab18118"} Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.290414 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8e5bf86-cc9a-44f4-868d-74852cfe50dd","Type":"ContainerStarted","Data":"6f07770e06b58d7c2dd7ae51c69ca34abb109597ca96c9e47945aafbcd1b85c4"} Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.299051 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhcdz" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.706733 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.710986 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-sw7s7" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.755911 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.845589 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d7b53f4-cc2b-4572-afee-60441a232155-config-volume\") pod \"3d7b53f4-cc2b-4572-afee-60441a232155\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.845836 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d7b53f4-cc2b-4572-afee-60441a232155-secret-volume\") pod \"3d7b53f4-cc2b-4572-afee-60441a232155\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.845898 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md7bk\" (UniqueName: \"kubernetes.io/projected/3d7b53f4-cc2b-4572-afee-60441a232155-kube-api-access-md7bk\") pod \"3d7b53f4-cc2b-4572-afee-60441a232155\" (UID: \"3d7b53f4-cc2b-4572-afee-60441a232155\") " Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.849140 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d7b53f4-cc2b-4572-afee-60441a232155-config-volume" (OuterVolumeSpecName: "config-volume") pod "3d7b53f4-cc2b-4572-afee-60441a232155" (UID: "3d7b53f4-cc2b-4572-afee-60441a232155"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.871696 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d7b53f4-cc2b-4572-afee-60441a232155-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3d7b53f4-cc2b-4572-afee-60441a232155" (UID: "3d7b53f4-cc2b-4572-afee-60441a232155"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.872072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d7b53f4-cc2b-4572-afee-60441a232155-kube-api-access-md7bk" (OuterVolumeSpecName: "kube-api-access-md7bk") pod "3d7b53f4-cc2b-4572-afee-60441a232155" (UID: "3d7b53f4-cc2b-4572-afee-60441a232155"). InnerVolumeSpecName "kube-api-access-md7bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.948023 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3d7b53f4-cc2b-4572-afee-60441a232155-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.948069 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md7bk\" (UniqueName: \"kubernetes.io/projected/3d7b53f4-cc2b-4572-afee-60441a232155-kube-api-access-md7bk\") on node \"crc\" DevicePath \"\"" Jan 23 07:57:54 crc kubenswrapper[4982]: I0123 07:57:54.948084 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d7b53f4-cc2b-4572-afee-60441a232155-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 07:57:55 crc kubenswrapper[4982]: I0123 07:57:55.318559 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" event={"ID":"3d7b53f4-cc2b-4572-afee-60441a232155","Type":"ContainerDied","Data":"5f6f54b0ef21d64901db55ed75b76424cab138cef1d8c955d82e4473b28de3a6"} Jan 23 07:57:55 crc kubenswrapper[4982]: I0123 07:57:55.318652 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f6f54b0ef21d64901db55ed75b76424cab138cef1d8c955d82e4473b28de3a6" Jan 23 07:57:55 crc kubenswrapper[4982]: I0123 07:57:55.318665 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk" Jan 23 07:57:55 crc kubenswrapper[4982]: I0123 07:57:55.345797 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8e5bf86-cc9a-44f4-868d-74852cfe50dd","Type":"ContainerStarted","Data":"6c97ebd75c2644f1a2fc645007ccca993945ca284fbed7d02f5e3e4d67c8aa07"} Jan 23 07:57:55 crc kubenswrapper[4982]: I0123 07:57:55.364406 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.364357869 podStartE2EDuration="3.364357869s" podCreationTimestamp="2026-01-23 07:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:57:55.362025196 +0000 UTC m=+149.842388582" watchObservedRunningTime="2026-01-23 07:57:55.364357869 +0000 UTC m=+149.844721245" Jan 23 07:57:56 crc kubenswrapper[4982]: I0123 07:57:56.371444 4982 generic.go:334] "Generic (PLEG): container finished" podID="f8e5bf86-cc9a-44f4-868d-74852cfe50dd" containerID="6c97ebd75c2644f1a2fc645007ccca993945ca284fbed7d02f5e3e4d67c8aa07" exitCode=0 Jan 23 07:57:56 crc kubenswrapper[4982]: I0123 07:57:56.371521 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8e5bf86-cc9a-44f4-868d-74852cfe50dd","Type":"ContainerDied","Data":"6c97ebd75c2644f1a2fc645007ccca993945ca284fbed7d02f5e3e4d67c8aa07"} Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.306064 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 07:57:57 crc kubenswrapper[4982]: E0123 07:57:57.307667 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d7b53f4-cc2b-4572-afee-60441a232155" containerName="collect-profiles" Jan 23 07:57:57 crc 
kubenswrapper[4982]: I0123 07:57:57.307728 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d7b53f4-cc2b-4572-afee-60441a232155" containerName="collect-profiles" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.307884 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d7b53f4-cc2b-4572-afee-60441a232155" containerName="collect-profiles" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.308529 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.314208 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.314586 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.346608 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.393548 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.393687 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.503535 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.503666 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.504318 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.542314 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.642658 4982 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:57:57 crc kubenswrapper[4982]: I0123 07:57:57.874102 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.019104 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kube-api-access\") pod \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.019248 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kubelet-dir\") pod \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\" (UID: \"f8e5bf86-cc9a-44f4-868d-74852cfe50dd\") " Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.019408 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f8e5bf86-cc9a-44f4-868d-74852cfe50dd" (UID: "f8e5bf86-cc9a-44f4-868d-74852cfe50dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.020013 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.023650 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f8e5bf86-cc9a-44f4-868d-74852cfe50dd" (UID: "f8e5bf86-cc9a-44f4-868d-74852cfe50dd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.104681 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.122042 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8e5bf86-cc9a-44f4-868d-74852cfe50dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 07:57:58 crc kubenswrapper[4982]: W0123 07:57:58.130943 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0d9dd776_03f7_489b_bda6_0cb201c6bd08.slice/crio-80e06bdcdd8b8e9649d87cc4b6d865e5fb8ae42f77b77f32e1366319f636a0b1 WatchSource:0}: Error finding container 80e06bdcdd8b8e9649d87cc4b6d865e5fb8ae42f77b77f32e1366319f636a0b1: Status 404 returned error can't find the container with id 80e06bdcdd8b8e9649d87cc4b6d865e5fb8ae42f77b77f32e1366319f636a0b1 Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.417495 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0d9dd776-03f7-489b-bda6-0cb201c6bd08","Type":"ContainerStarted","Data":"80e06bdcdd8b8e9649d87cc4b6d865e5fb8ae42f77b77f32e1366319f636a0b1"} Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.420229 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8e5bf86-cc9a-44f4-868d-74852cfe50dd","Type":"ContainerDied","Data":"6f07770e06b58d7c2dd7ae51c69ca34abb109597ca96c9e47945aafbcd1b85c4"} Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.420285 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f07770e06b58d7c2dd7ae51c69ca34abb109597ca96c9e47945aafbcd1b85c4" Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.420325 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 07:57:58 crc kubenswrapper[4982]: I0123 07:57:58.835257 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-594h7" Jan 23 07:58:00 crc kubenswrapper[4982]: I0123 07:58:00.458326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0d9dd776-03f7-489b-bda6-0cb201c6bd08","Type":"ContainerStarted","Data":"a2ad12df319347b7a00c42185244fe5a494ebbe8fc7bcd6e72f5effae32bc124"} Jan 23 07:58:00 crc kubenswrapper[4982]: I0123 07:58:00.477572 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.477551044 podStartE2EDuration="3.477551044s" podCreationTimestamp="2026-01-23 07:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:00.475820868 +0000 UTC m=+154.956184254" watchObservedRunningTime="2026-01-23 07:58:00.477551044 +0000 UTC m=+154.957914430" Jan 23 07:58:01 crc kubenswrapper[4982]: I0123 07:58:01.472268 4982 generic.go:334] "Generic (PLEG): container finished" podID="0d9dd776-03f7-489b-bda6-0cb201c6bd08" containerID="a2ad12df319347b7a00c42185244fe5a494ebbe8fc7bcd6e72f5effae32bc124" exitCode=0 Jan 23 07:58:01 crc kubenswrapper[4982]: I0123 07:58:01.472321 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0d9dd776-03f7-489b-bda6-0cb201c6bd08","Type":"ContainerDied","Data":"a2ad12df319347b7a00c42185244fe5a494ebbe8fc7bcd6e72f5effae32bc124"} Jan 23 07:58:03 crc kubenswrapper[4982]: I0123 07:58:03.600574 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jzgnr" Jan 23 07:58:03 crc kubenswrapper[4982]: I0123 07:58:03.602914 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:58:03 crc kubenswrapper[4982]: I0123 07:58:03.608024 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 07:58:05 crc kubenswrapper[4982]: I0123 07:58:05.539757 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nc4fw"] Jan 23 07:58:05 crc kubenswrapper[4982]: I0123 07:58:05.540476 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" containerID="cri-o://d801644df823e0701aa8982ce672994ed9c338584d654bb21f040af0184ad505" gracePeriod=30 Jan 23 07:58:05 crc kubenswrapper[4982]: I0123 07:58:05.546000 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k"] Jan 23 07:58:05 crc kubenswrapper[4982]: I0123 07:58:05.546276 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" containerID="cri-o://6e9bcc28ef8eb8e648a78a0070c25824bb20955ccfaa1270abbabc4c44b83d4b" gracePeriod=30 Jan 23 07:58:06 crc kubenswrapper[4982]: I0123 07:58:06.563361 4982 
generic.go:334] "Generic (PLEG): container finished" podID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerID="d801644df823e0701aa8982ce672994ed9c338584d654bb21f040af0184ad505" exitCode=0 Jan 23 07:58:06 crc kubenswrapper[4982]: I0123 07:58:06.563483 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" event={"ID":"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e","Type":"ContainerDied","Data":"d801644df823e0701aa8982ce672994ed9c338584d654bb21f040af0184ad505"} Jan 23 07:58:06 crc kubenswrapper[4982]: I0123 07:58:06.566788 4982 generic.go:334] "Generic (PLEG): container finished" podID="2b188236-85a4-405c-9651-ba4b15dc4956" containerID="6e9bcc28ef8eb8e648a78a0070c25824bb20955ccfaa1270abbabc4c44b83d4b" exitCode=0 Jan 23 07:58:06 crc kubenswrapper[4982]: I0123 07:58:06.566841 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" event={"ID":"2b188236-85a4-405c-9651-ba4b15dc4956","Type":"ContainerDied","Data":"6e9bcc28ef8eb8e648a78a0070c25824bb20955ccfaa1270abbabc4c44b83d4b"} Jan 23 07:58:07 crc kubenswrapper[4982]: I0123 07:58:07.931814 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:58:07 crc kubenswrapper[4982]: I0123 07:58:07.938339 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0efe4bf7-1550-4885-a344-d045b3a1c653-metrics-certs\") pod \"network-metrics-daemon-xxghm\" (UID: \"0efe4bf7-1550-4885-a344-d045b3a1c653\") " pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:58:08 crc kubenswrapper[4982]: I0123 07:58:08.173289 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xxghm" Jan 23 07:58:11 crc kubenswrapper[4982]: I0123 07:58:11.435780 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:58:11 crc kubenswrapper[4982]: I0123 07:58:11.435853 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:58:12 crc kubenswrapper[4982]: I0123 07:58:12.323852 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 07:58:12 crc kubenswrapper[4982]: I0123 07:58:12.818966 4982 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nc4fw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 23 07:58:12 crc kubenswrapper[4982]: I0123 07:58:12.819024 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 23 07:58:12 crc kubenswrapper[4982]: I0123 07:58:12.909947 4982 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lrw5k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 07:58:12 crc kubenswrapper[4982]: I0123 07:58:12.910028 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.695774 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.871085 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kube-api-access\") pod \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.871507 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kubelet-dir\") pod \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\" (UID: \"0d9dd776-03f7-489b-bda6-0cb201c6bd08\") " Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.871574 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d9dd776-03f7-489b-bda6-0cb201c6bd08" (UID: "0d9dd776-03f7-489b-bda6-0cb201c6bd08"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.871753 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.877799 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d9dd776-03f7-489b-bda6-0cb201c6bd08" (UID: "0d9dd776-03f7-489b-bda6-0cb201c6bd08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:17 crc kubenswrapper[4982]: I0123 07:58:17.972225 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d9dd776-03f7-489b-bda6-0cb201c6bd08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:18 crc kubenswrapper[4982]: I0123 07:58:18.639324 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0d9dd776-03f7-489b-bda6-0cb201c6bd08","Type":"ContainerDied","Data":"80e06bdcdd8b8e9649d87cc4b6d865e5fb8ae42f77b77f32e1366319f636a0b1"} Jan 23 07:58:18 crc kubenswrapper[4982]: I0123 07:58:18.639848 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e06bdcdd8b8e9649d87cc4b6d865e5fb8ae42f77b77f32e1366319f636a0b1" Jan 23 07:58:18 crc kubenswrapper[4982]: I0123 07:58:18.639928 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 07:58:23 crc kubenswrapper[4982]: I0123 07:58:23.819280 4982 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nc4fw container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 07:58:23 crc kubenswrapper[4982]: I0123 07:58:23.820007 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 07:58:23 crc kubenswrapper[4982]: I0123 07:58:23.910003 4982 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lrw5k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: i/o timeout" start-of-body= Jan 23 07:58:23 crc kubenswrapper[4982]: I0123 07:58:23.910092 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: i/o timeout" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.117547 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pc68l" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.139378 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.167333 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.169798 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f866847d-tl728"] Jan 23 07:58:24 crc kubenswrapper[4982]: E0123 07:58:24.170080 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e5bf86-cc9a-44f4-868d-74852cfe50dd" containerName="pruner" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170112 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e5bf86-cc9a-44f4-868d-74852cfe50dd" containerName="pruner" Jan 23 07:58:24 crc kubenswrapper[4982]: E0123 07:58:24.170128 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170136 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" Jan 23 07:58:24 crc kubenswrapper[4982]: E0123 07:58:24.170151 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170161 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" Jan 23 07:58:24 crc kubenswrapper[4982]: E0123 07:58:24.170179 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9dd776-03f7-489b-bda6-0cb201c6bd08" containerName="pruner" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170188 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9dd776-03f7-489b-bda6-0cb201c6bd08" containerName="pruner" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170316 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9dd776-03f7-489b-bda6-0cb201c6bd08" containerName="pruner" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170330 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e5bf86-cc9a-44f4-868d-74852cfe50dd" containerName="pruner" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170347 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" containerName="route-controller-manager" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.170357 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" containerName="controller-manager" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.171173 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.179166 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f866847d-tl728"] Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201128 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pkc5\" (UniqueName: \"kubernetes.io/projected/2b188236-85a4-405c-9651-ba4b15dc4956-kube-api-access-2pkc5\") pod \"2b188236-85a4-405c-9651-ba4b15dc4956\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201168 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-config\") pod \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201222 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-serving-cert\") pod \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201261 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b188236-85a4-405c-9651-ba4b15dc4956-serving-cert\") pod \"2b188236-85a4-405c-9651-ba4b15dc4956\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201285 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-proxy-ca-bundles\") pod \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201321 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wwhk\" (UniqueName: \"kubernetes.io/projected/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-kube-api-access-5wwhk\") pod \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201357 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-client-ca\") pod \"2b188236-85a4-405c-9651-ba4b15dc4956\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201392 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-config\") pod \"2b188236-85a4-405c-9651-ba4b15dc4956\" (UID: \"2b188236-85a4-405c-9651-ba4b15dc4956\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-client-ca\") pod \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\" (UID: \"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e\") " Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201637 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-serving-cert\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-client-ca\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201758 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-config\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.201792 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5lhx\" (UniqueName: \"kubernetes.io/projected/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-kube-api-access-z5lhx\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.202381 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" (UID: "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.202892 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-config" (OuterVolumeSpecName: "config") pod "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" (UID: "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.203218 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b188236-85a4-405c-9651-ba4b15dc4956" (UID: "2b188236-85a4-405c-9651-ba4b15dc4956"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.203279 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-config" (OuterVolumeSpecName: "config") pod "2b188236-85a4-405c-9651-ba4b15dc4956" (UID: "2b188236-85a4-405c-9651-ba4b15dc4956"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.203471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" (UID: "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.219274 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" (UID: "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.220156 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b188236-85a4-405c-9651-ba4b15dc4956-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b188236-85a4-405c-9651-ba4b15dc4956" (UID: "2b188236-85a4-405c-9651-ba4b15dc4956"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.220641 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-kube-api-access-5wwhk" (OuterVolumeSpecName: "kube-api-access-5wwhk") pod "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" (UID: "9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e"). InnerVolumeSpecName "kube-api-access-5wwhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.221301 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b188236-85a4-405c-9651-ba4b15dc4956-kube-api-access-2pkc5" (OuterVolumeSpecName: "kube-api-access-2pkc5") pod "2b188236-85a4-405c-9651-ba4b15dc4956" (UID: "2b188236-85a4-405c-9651-ba4b15dc4956"). InnerVolumeSpecName "kube-api-access-2pkc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302527 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5lhx\" (UniqueName: \"kubernetes.io/projected/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-kube-api-access-z5lhx\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302613 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-serving-cert\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302682 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-client-ca\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302726 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-config\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302795 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pkc5\" (UniqueName: \"kubernetes.io/projected/2b188236-85a4-405c-9651-ba4b15dc4956-kube-api-access-2pkc5\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302812 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302823 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302835 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b188236-85a4-405c-9651-ba4b15dc4956-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302843 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302852 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wwhk\" (UniqueName: \"kubernetes.io/projected/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-kube-api-access-5wwhk\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302860 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302868 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b188236-85a4-405c-9651-ba4b15dc4956-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.302877 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.304104 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-config\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.305127 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-client-ca\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.307678 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-serving-cert\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.321711 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5lhx\" (UniqueName: \"kubernetes.io/projected/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-kube-api-access-z5lhx\") pod \"route-controller-manager-f866847d-tl728\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.493933 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.669177 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.669213 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nc4fw" event={"ID":"9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e","Type":"ContainerDied","Data":"0a7640133b6c8148ca8e27488608c173c7e250e10a078d9f1d5db1a9a22bc951"} Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.669275 4982 scope.go:117] "RemoveContainer" containerID="d801644df823e0701aa8982ce672994ed9c338584d654bb21f040af0184ad505" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.672879 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" event={"ID":"2b188236-85a4-405c-9651-ba4b15dc4956","Type":"ContainerDied","Data":"7ef1ff2b7b9d001629bbc249ea7181de923a3b463a10c04d412dd23cac6f0005"} Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.672948 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k" Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.711089 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k"] Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.719492 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lrw5k"] Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.723328 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nc4fw"] Jan 23 07:58:24 crc kubenswrapper[4982]: I0123 07:58:24.738809 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nc4fw"] Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.497099 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5464cb48c7-p54m6"] Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.497793 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.502245 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.502434 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.502473 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.503760 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.503798 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.503977 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.509301 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.510197 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5464cb48c7-p54m6"] Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.516444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-serving-cert\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.516498 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-proxy-ca-bundles\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.516521 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jjp\" (UniqueName: \"kubernetes.io/projected/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-kube-api-access-l4jjp\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.516552 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-config\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.516570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-client-ca\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.577643 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f866847d-tl728"] Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.618859 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-serving-cert\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.618932 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-proxy-ca-bundles\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.618966 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jjp\" (UniqueName: \"kubernetes.io/projected/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-kube-api-access-l4jjp\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.619005 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-config\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.619036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-client-ca\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.620104 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-proxy-ca-bundles\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.620187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-client-ca\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.620648 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-config\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.623763 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-serving-cert\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.643975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jjp\" (UniqueName: \"kubernetes.io/projected/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-kube-api-access-l4jjp\") pod \"controller-manager-5464cb48c7-p54m6\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.824028 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.831565 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.837343 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b188236-85a4-405c-9651-ba4b15dc4956" path="/var/lib/kubelet/pods/2b188236-85a4-405c-9651-ba4b15dc4956/volumes" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.837931 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e" path="/var/lib/kubelet/pods/9c271848-a6a6-4ca1-8ee6-fd4fa7b1a83e/volumes" Jan 23 07:58:25 crc kubenswrapper[4982]: E0123 07:58:25.863589 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 07:58:25 crc kubenswrapper[4982]: E0123 07:58:25.863898 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9pqmw_openshift-marketplace(19faede0-5207-4753-846b-4e4975ebc03a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 07:58:25 crc kubenswrapper[4982]: E0123 07:58:25.865201 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9pqmw" podUID="19faede0-5207-4753-846b-4e4975ebc03a" Jan 23 07:58:25 crc kubenswrapper[4982]: I0123 07:58:25.898049 4982 scope.go:117] "RemoveContainer" containerID="6e9bcc28ef8eb8e648a78a0070c25824bb20955ccfaa1270abbabc4c44b83d4b" Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.015470 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xxghm"] Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.214474 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5464cb48c7-p54m6"] Jan 23 07:58:26 crc kubenswrapper[4982]: W0123 07:58:26.221112 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2fa6ae5_3d73_4889_be9e_eaf022ddaf1b.slice/crio-b511ff3ca2587ede40c3d2c70cb7caaaf755c2f27fd0923699b0eb58752dfc9f WatchSource:0}: Error finding container b511ff3ca2587ede40c3d2c70cb7caaaf755c2f27fd0923699b0eb58752dfc9f: Status 404 returned error can't find the container with id b511ff3ca2587ede40c3d2c70cb7caaaf755c2f27fd0923699b0eb58752dfc9f Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.371931 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f866847d-tl728"] Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.694433 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" 
event={"ID":"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b","Type":"ContainerStarted","Data":"b511ff3ca2587ede40c3d2c70cb7caaaf755c2f27fd0923699b0eb58752dfc9f"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.698472 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerStarted","Data":"58b451ec9b93aab66e62b8436590fa0b579b52724024493f94564a76124693c6"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.699519 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" event={"ID":"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed","Type":"ContainerStarted","Data":"cd26099ad081e17e920531a4a148db584e6360e9d10b10198c1c457279bdc0c6"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.700783 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xxghm" event={"ID":"0efe4bf7-1550-4885-a344-d045b3a1c653","Type":"ContainerStarted","Data":"d22489d618dbbaf0418deed5fed1d49178d07970b9c80bd8f382d8bd3e583e61"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.707478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerStarted","Data":"ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.709756 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerStarted","Data":"ff66d1b877d470648e2845b4e51485351dbbf3009a5af77438c34bd9f7504956"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.727555 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerStarted","Data":"20af36577fe950b992090d8dfa21a63742d75d007f577d8fd471d05006fe56c2"} Jan 23 07:58:26 crc kubenswrapper[4982]: I0123 07:58:26.737202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerStarted","Data":"032e5013b9a0053ea75fab6088f08c5974b4208abe52176d5c13c249645b8714"} Jan 23 07:58:26 crc kubenswrapper[4982]: E0123 07:58:26.737990 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9pqmw" podUID="19faede0-5207-4753-846b-4e4975ebc03a" Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.745442 4982 generic.go:334] "Generic (PLEG): container finished" podID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerID="ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52" exitCode=0 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.747077 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerDied","Data":"ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.748653 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-xxghm" event={"ID":"0efe4bf7-1550-4885-a344-d045b3a1c653","Type":"ContainerStarted","Data":"3490c6395a64178cd5e376a9a51ad93e435341cfcca0dad4f5d272792e4860fa"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.751741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" event={"ID":"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b","Type":"ContainerStarted","Data":"17cf6e260143ad6250597edf70c79e4076420af3f3888ffa058cb8055616167b"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.752399 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.753316 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" event={"ID":"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed","Type":"ContainerStarted","Data":"42dac80c6d0f9d3631f18f3ec0c80813cc5c9caa5e3ac76b641d3753b06f5415"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.753468 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" podUID="9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" containerName="route-controller-manager" containerID="cri-o://42dac80c6d0f9d3631f18f3ec0c80813cc5c9caa5e3ac76b641d3753b06f5415" gracePeriod=30 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.753946 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.755974 4982 generic.go:334] "Generic (PLEG): container finished" podID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerID="20af36577fe950b992090d8dfa21a63742d75d007f577d8fd471d05006fe56c2" exitCode=0 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.756288 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerDied","Data":"20af36577fe950b992090d8dfa21a63742d75d007f577d8fd471d05006fe56c2"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.758109 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.759518 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerID="032e5013b9a0053ea75fab6088f08c5974b4208abe52176d5c13c249645b8714" exitCode=0 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.759664 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerDied","Data":"032e5013b9a0053ea75fab6088f08c5974b4208abe52176d5c13c249645b8714"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.761057 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.774331 4982 generic.go:334] "Generic (PLEG): container finished" podID="0dc556e8-7362-4cba-8d17-bf824323503b" containerID="8e580b9bf3db19d4926d3108e5a55ca8881e4b8755cbd7b6d5657a69e14e2e2e" 
exitCode=0 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.774412 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2k6hw" event={"ID":"0dc556e8-7362-4cba-8d17-bf824323503b","Type":"ContainerDied","Data":"8e580b9bf3db19d4926d3108e5a55ca8881e4b8755cbd7b6d5657a69e14e2e2e"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.777176 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerStarted","Data":"2c0b6b2823361f78495cad93e4b1a0d16ddf6e0416aec38ca44278427e6236ad"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.779495 4982 generic.go:334] "Generic (PLEG): container finished" podID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerID="ff66d1b877d470648e2845b4e51485351dbbf3009a5af77438c34bd9f7504956" exitCode=0 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.779549 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerDied","Data":"ff66d1b877d470648e2845b4e51485351dbbf3009a5af77438c34bd9f7504956"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.780964 4982 generic.go:334] "Generic (PLEG): container finished" podID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerID="58b451ec9b93aab66e62b8436590fa0b579b52724024493f94564a76124693c6" exitCode=0 Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.780992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerDied","Data":"58b451ec9b93aab66e62b8436590fa0b579b52724024493f94564a76124693c6"} Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.810195 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" podStartSLOduration=22.810176946 podStartE2EDuration="22.810176946s" podCreationTimestamp="2026-01-23 07:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:27.809718864 +0000 UTC m=+182.290082250" watchObservedRunningTime="2026-01-23 07:58:27.810176946 +0000 UTC m=+182.290540352" Jan 23 07:58:27 crc kubenswrapper[4982]: I0123 07:58:27.859377 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" podStartSLOduration=2.859355596 podStartE2EDuration="2.859355596s" podCreationTimestamp="2026-01-23 07:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:27.852479051 +0000 UTC m=+182.332842437" watchObservedRunningTime="2026-01-23 07:58:27.859355596 +0000 UTC m=+182.339718982" Jan 23 07:58:28 crc kubenswrapper[4982]: I0123 07:58:28.790900 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xxghm" event={"ID":"0efe4bf7-1550-4885-a344-d045b3a1c653","Type":"ContainerStarted","Data":"7f2a5a173bc717343d81025ad7eb717ff1ab0a0c13d1e431d1d5a392d4c34e61"} Jan 23 07:58:28 crc kubenswrapper[4982]: I0123 07:58:28.793823 4982 generic.go:334] "Generic (PLEG): container finished" podID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" 
containerID="2c0b6b2823361f78495cad93e4b1a0d16ddf6e0416aec38ca44278427e6236ad" exitCode=0 Jan 23 07:58:28 crc kubenswrapper[4982]: I0123 07:58:28.793905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerDied","Data":"2c0b6b2823361f78495cad93e4b1a0d16ddf6e0416aec38ca44278427e6236ad"} Jan 23 07:58:28 crc kubenswrapper[4982]: I0123 07:58:28.795991 4982 generic.go:334] "Generic (PLEG): container finished" podID="9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" containerID="42dac80c6d0f9d3631f18f3ec0c80813cc5c9caa5e3ac76b641d3753b06f5415" exitCode=0 Jan 23 07:58:28 crc kubenswrapper[4982]: I0123 07:58:28.796512 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" event={"ID":"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed","Type":"ContainerDied","Data":"42dac80c6d0f9d3631f18f3ec0c80813cc5c9caa5e3ac76b641d3753b06f5415"} Jan 23 07:58:28 crc kubenswrapper[4982]: I0123 07:58:28.813127 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xxghm" podStartSLOduration=163.813092445 podStartE2EDuration="2m43.813092445s" podCreationTimestamp="2026-01-23 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:28.807929676 +0000 UTC m=+183.288293082" watchObservedRunningTime="2026-01-23 07:58:28.813092445 +0000 UTC m=+183.293455831" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.333095 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.356485 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5lhx\" (UniqueName: \"kubernetes.io/projected/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-kube-api-access-z5lhx\") pod \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.356536 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-serving-cert\") pod \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.356657 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-client-ca\") pod \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.356735 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-config\") pod \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\" (UID: \"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed\") " Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.357706 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-config" (OuterVolumeSpecName: "config") pod "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" (UID: "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.358934 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" (UID: "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.360538 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf"] Jan 23 07:58:29 crc kubenswrapper[4982]: E0123 07:58:29.360968 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" containerName="route-controller-manager" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.360989 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" containerName="route-controller-manager" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.361154 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" containerName="route-controller-manager" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.361901 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.363712 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-kube-api-access-z5lhx" (OuterVolumeSpecName: "kube-api-access-z5lhx") pod "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" (UID: "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed"). InnerVolumeSpecName "kube-api-access-z5lhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.364583 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" (UID: "9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.373931 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf"] Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.457954 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b66576eb-9007-4a4b-8e7a-c5c28570d825-serving-cert\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458299 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngfcd\" (UniqueName: \"kubernetes.io/projected/b66576eb-9007-4a4b-8e7a-c5c28570d825-kube-api-access-ngfcd\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458336 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-client-ca\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458363 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-config\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458547 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458571 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458583 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5lhx\" (UniqueName: \"kubernetes.io/projected/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-kube-api-access-z5lhx\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.458595 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.559873 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b66576eb-9007-4a4b-8e7a-c5c28570d825-serving-cert\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " 
pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.559915 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngfcd\" (UniqueName: \"kubernetes.io/projected/b66576eb-9007-4a4b-8e7a-c5c28570d825-kube-api-access-ngfcd\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.559944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-client-ca\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.559962 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-config\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.560898 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-client-ca\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.568249 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b66576eb-9007-4a4b-8e7a-c5c28570d825-serving-cert\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.569366 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-config\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.576621 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngfcd\" (UniqueName: \"kubernetes.io/projected/b66576eb-9007-4a4b-8e7a-c5c28570d825-kube-api-access-ngfcd\") pod \"route-controller-manager-587dbd655d-9fgpf\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.728798 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.801925 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" event={"ID":"9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed","Type":"ContainerDied","Data":"cd26099ad081e17e920531a4a148db584e6360e9d10b10198c1c457279bdc0c6"} Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.801994 4982 scope.go:117] "RemoveContainer" containerID="42dac80c6d0f9d3631f18f3ec0c80813cc5c9caa5e3ac76b641d3753b06f5415" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.802111 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f866847d-tl728" Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.834807 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f866847d-tl728"] Jan 23 07:58:29 crc kubenswrapper[4982]: I0123 07:58:29.834849 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f866847d-tl728"] Jan 23 07:58:30 crc kubenswrapper[4982]: I0123 07:58:30.687141 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf"] Jan 23 07:58:30 crc kubenswrapper[4982]: W0123 07:58:30.697049 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb66576eb_9007_4a4b_8e7a_c5c28570d825.slice/crio-c8a22ad870e55287fb1874a1e8797d1acbf78b6e3f6ec1949bb7a2883458615d WatchSource:0}: Error finding container c8a22ad870e55287fb1874a1e8797d1acbf78b6e3f6ec1949bb7a2883458615d: Status 404 returned error can't find the container with id c8a22ad870e55287fb1874a1e8797d1acbf78b6e3f6ec1949bb7a2883458615d Jan 23 07:58:30 crc kubenswrapper[4982]: I0123 07:58:30.807741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerStarted","Data":"388e7a703050c17e98728877ed36dbe0172090ff779f5416ca9a29a9d3d3d18f"} Jan 23 07:58:30 crc kubenswrapper[4982]: I0123 07:58:30.812127 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" event={"ID":"b66576eb-9007-4a4b-8e7a-c5c28570d825","Type":"ContainerStarted","Data":"c8a22ad870e55287fb1874a1e8797d1acbf78b6e3f6ec1949bb7a2883458615d"} Jan 23 07:58:30 crc kubenswrapper[4982]: I0123 07:58:30.822441 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-79l5k" Jan 23 07:58:30 crc kubenswrapper[4982]: I0123 07:58:30.822662 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-79l5k" Jan 23 07:58:30 crc kubenswrapper[4982]: I0123 07:58:30.825913 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-79l5k" podStartSLOduration=2.515381301 podStartE2EDuration="40.825899748s" podCreationTimestamp="2026-01-23 07:57:50 +0000 UTC" firstStartedPulling="2026-01-23 07:57:51.891870472 +0000 UTC m=+146.372233858" lastFinishedPulling="2026-01-23 07:58:30.202388919 +0000 UTC m=+184.682752305" observedRunningTime="2026-01-23 
07:58:30.82375228 +0000 UTC m=+185.304115666" watchObservedRunningTime="2026-01-23 07:58:30.825899748 +0000 UTC m=+185.306263124" Jan 23 07:58:31 crc kubenswrapper[4982]: I0123 07:58:31.746472 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgf6m"] Jan 23 07:58:31 crc kubenswrapper[4982]: I0123 07:58:31.835540 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed" path="/var/lib/kubelet/pods/9e2bf09e-84e8-4fbf-aa8f-89628b6dc3ed/volumes" Jan 23 07:58:31 crc kubenswrapper[4982]: I0123 07:58:31.966302 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-79l5k" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="registry-server" probeResult="failure" output=< Jan 23 07:58:31 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 07:58:31 crc kubenswrapper[4982]: > Jan 23 07:58:31 crc kubenswrapper[4982]: I0123 07:58:31.999169 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 07:58:32 crc kubenswrapper[4982]: I0123 07:58:32.824536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerStarted","Data":"77197efa1d2703c2a6d30e1262e62ac47f476793746d989ce499bdf538d35ff5"} Jan 23 07:58:32 crc kubenswrapper[4982]: I0123 07:58:32.825975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" event={"ID":"b66576eb-9007-4a4b-8e7a-c5c28570d825","Type":"ContainerStarted","Data":"1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce"} Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.488792 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.489448 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.491227 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.491462 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.501277 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.509962 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a37471-68b9-47ae-838a-3169d7441263-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.510022 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a37471-68b9-47ae-838a-3169d7441263-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.611561 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a37471-68b9-47ae-838a-3169d7441263-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.611673 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a37471-68b9-47ae-838a-3169d7441263-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.611735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a37471-68b9-47ae-838a-3169d7441263-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.629419 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a37471-68b9-47ae-838a-3169d7441263-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.810878 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.846152 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.846264 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.848175 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" podStartSLOduration=8.848155164 podStartE2EDuration="8.848155164s" podCreationTimestamp="2026-01-23 07:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:33.846703865 +0000 UTC m=+188.327067251" watchObservedRunningTime="2026-01-23 07:58:33.848155164 +0000 UTC m=+188.328518560" Jan 23 07:58:33 crc kubenswrapper[4982]: I0123 07:58:33.882387 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xr2l" podStartSLOduration=5.639715256 podStartE2EDuration="45.882365422s" podCreationTimestamp="2026-01-23 07:57:48 +0000 UTC" firstStartedPulling="2026-01-23 07:57:51.872179104 +0000 UTC m=+146.352542480" lastFinishedPulling="2026-01-23 07:58:32.11482926 +0000 UTC m=+186.595192646" observedRunningTime="2026-01-23 07:58:33.871213833 +0000 UTC m=+188.351577249" watchObservedRunningTime="2026-01-23 07:58:33.882365422 +0000 UTC m=+188.362728808" Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.402163 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 07:58:34 crc kubenswrapper[4982]: W0123 07:58:34.409546 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc3a37471_68b9_47ae_838a_3169d7441263.slice/crio-5e5882f3c44d5003f6d8063c21b48cf7d8cf78e7c41da54dfe46cb5eb2b0779c WatchSource:0}: Error finding container 5e5882f3c44d5003f6d8063c21b48cf7d8cf78e7c41da54dfe46cb5eb2b0779c: Status 404 returned error can't find the container with id 5e5882f3c44d5003f6d8063c21b48cf7d8cf78e7c41da54dfe46cb5eb2b0779c Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.838867 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2k6hw" event={"ID":"0dc556e8-7362-4cba-8d17-bf824323503b","Type":"ContainerStarted","Data":"16d5d4e4f5e9edbfc289d7145377ef7c4f5377168734fef6d6f02839f390fcfc"} Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.841169 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerStarted","Data":"fd06a92141e7e1a9b2f782938a2e0e1bb791c79eaf9bf6217223206b43a9ee4a"} Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.842161 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c3a37471-68b9-47ae-838a-3169d7441263","Type":"ContainerStarted","Data":"5e5882f3c44d5003f6d8063c21b48cf7d8cf78e7c41da54dfe46cb5eb2b0779c"} Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.844365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerStarted","Data":"93319a044b667ca77e1812fe5dafb131eefdfa01bd19afa9caa35da26c442b49"} Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.847049 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerStarted","Data":"c4881b02c0d38e4dd549f3804ba958cb520d297080d5fe309ee941309cb025ed"} Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.849681 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerStarted","Data":"2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11"} Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.865030 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55vpx" podStartSLOduration=3.065750318 podStartE2EDuration="43.865013927s" podCreationTimestamp="2026-01-23 07:57:51 +0000 UTC" firstStartedPulling="2026-01-23 07:57:53.156195874 +0000 UTC m=+147.636559260" lastFinishedPulling="2026-01-23 07:58:33.955459483 +0000 UTC m=+188.435822869" observedRunningTime="2026-01-23 07:58:34.86287621 +0000 UTC m=+189.343239596" watchObservedRunningTime="2026-01-23 07:58:34.865013927 +0000 UTC m=+189.345377313" Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.883688 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7j6f" podStartSLOduration=4.718160742 podStartE2EDuration="46.883657897s" podCreationTimestamp="2026-01-23 07:57:48 +0000 UTC" firstStartedPulling="2026-01-23 07:57:51.919694499 +0000 UTC m=+146.400057885" lastFinishedPulling="2026-01-23 07:58:34.085191654 +0000 UTC m=+188.565555040" observedRunningTime="2026-01-23 07:58:34.880733369 +0000 UTC m=+189.361096755" watchObservedRunningTime="2026-01-23 07:58:34.883657897 +0000 UTC m=+189.364021283" Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.926838 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wg2vk" podStartSLOduration=3.283308544 podStartE2EDuration="43.926816365s" podCreationTimestamp="2026-01-23 07:57:51 +0000 UTC" firstStartedPulling="2026-01-23 07:57:53.163766257 +0000 UTC m=+147.644129643" lastFinishedPulling="2026-01-23 07:58:33.807274078 +0000 UTC m=+188.287637464" observedRunningTime="2026-01-23 07:58:34.924711059 +0000 UTC m=+189.405074445" watchObservedRunningTime="2026-01-23 07:58:34.926816365 +0000 UTC m=+189.407179751" Jan 23 07:58:34 crc kubenswrapper[4982]: I0123 07:58:34.928981 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hljdw" podStartSLOduration=4.731456478 podStartE2EDuration="46.928972373s" podCreationTimestamp="2026-01-23 07:57:48 +0000 UTC" firstStartedPulling="2026-01-23 07:57:51.901753197 +0000 UTC m=+146.382116583" lastFinishedPulling="2026-01-23 07:58:34.099269092 +0000 UTC m=+188.579632478" observedRunningTime="2026-01-23 07:58:34.906976323 +0000 UTC m=+189.387339709" watchObservedRunningTime="2026-01-23 07:58:34.928972373 +0000 UTC m=+189.409335759" Jan 23 07:58:35 crc kubenswrapper[4982]: I0123 07:58:35.858117 4982 generic.go:334] "Generic (PLEG): container finished" podID="c3a37471-68b9-47ae-838a-3169d7441263" 
containerID="2de80d20e149325a3696020d751a576d71caf273aba6e9b29e73d133720bdfcc" exitCode=0 Jan 23 07:58:35 crc kubenswrapper[4982]: I0123 07:58:35.858509 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c3a37471-68b9-47ae-838a-3169d7441263","Type":"ContainerDied","Data":"2de80d20e149325a3696020d751a576d71caf273aba6e9b29e73d133720bdfcc"} Jan 23 07:58:35 crc kubenswrapper[4982]: I0123 07:58:35.909094 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2k6hw" podStartSLOduration=3.5818945060000003 podStartE2EDuration="45.909074889s" podCreationTimestamp="2026-01-23 07:57:50 +0000 UTC" firstStartedPulling="2026-01-23 07:57:51.866356248 +0000 UTC m=+146.346719634" lastFinishedPulling="2026-01-23 07:58:34.193536631 +0000 UTC m=+188.673900017" observedRunningTime="2026-01-23 07:58:35.908198516 +0000 UTC m=+190.388561902" watchObservedRunningTime="2026-01-23 07:58:35.909074889 +0000 UTC m=+190.389438275" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.171758 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.312312 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a37471-68b9-47ae-838a-3169d7441263-kube-api-access\") pod \"c3a37471-68b9-47ae-838a-3169d7441263\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.312393 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a37471-68b9-47ae-838a-3169d7441263-kubelet-dir\") pod \"c3a37471-68b9-47ae-838a-3169d7441263\" (UID: \"c3a37471-68b9-47ae-838a-3169d7441263\") " Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.312546 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3a37471-68b9-47ae-838a-3169d7441263-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c3a37471-68b9-47ae-838a-3169d7441263" (UID: "c3a37471-68b9-47ae-838a-3169d7441263"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.317883 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a37471-68b9-47ae-838a-3169d7441263-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c3a37471-68b9-47ae-838a-3169d7441263" (UID: "c3a37471-68b9-47ae-838a-3169d7441263"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.413711 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3a37471-68b9-47ae-838a-3169d7441263-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.413773 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3a37471-68b9-47ae-838a-3169d7441263-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.871975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c3a37471-68b9-47ae-838a-3169d7441263","Type":"ContainerDied","Data":"5e5882f3c44d5003f6d8063c21b48cf7d8cf78e7c41da54dfe46cb5eb2b0779c"} Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.872018 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e5882f3c44d5003f6d8063c21b48cf7d8cf78e7c41da54dfe46cb5eb2b0779c" Jan 23 07:58:37 crc kubenswrapper[4982]: I0123 07:58:37.872021 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 07:58:38 crc kubenswrapper[4982]: I0123 07:58:38.710125 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:58:38 crc kubenswrapper[4982]: I0123 07:58:38.710176 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:58:38 crc kubenswrapper[4982]: I0123 07:58:38.753281 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:58:38 crc kubenswrapper[4982]: I0123 07:58:38.969674 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:58:38 crc kubenswrapper[4982]: I0123 07:58:38.969745 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.015364 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.080948 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.080999 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.122645 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.293133 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 07:58:39 crc kubenswrapper[4982]: E0123 07:58:39.293774 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a37471-68b9-47ae-838a-3169d7441263" containerName="pruner" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.293800 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a37471-68b9-47ae-838a-3169d7441263" 
containerName="pruner" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.294100 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a37471-68b9-47ae-838a-3169d7441263" containerName="pruner" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.294827 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.297977 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.298533 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.299391 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-var-lock\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.299458 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6953a35-140d-4781-a533-4d6d61fd418d-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.299488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.315961 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.400549 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-var-lock\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.400610 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6953a35-140d-4781-a533-4d6d61fd418d-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.400648 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.400723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.400758 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-var-lock\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.421749 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6953a35-140d-4781-a533-4d6d61fd418d-kube-api-access\") pod \"installer-9-crc\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.619337 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.926135 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:58:39 crc kubenswrapper[4982]: I0123 07:58:39.931159 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:58:40 crc kubenswrapper[4982]: I0123 07:58:40.075893 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 07:58:40 crc kubenswrapper[4982]: W0123 07:58:40.091326 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf6953a35_140d_4781_a533_4d6d61fd418d.slice/crio-f339ff09a9b6295247e2149a245d72f4b9b0c5e4d845ac7562699806d8bce47f WatchSource:0}: Error finding container f339ff09a9b6295247e2149a245d72f4b9b0c5e4d845ac7562699806d8bce47f: Status 404 returned error can't find the container with id f339ff09a9b6295247e2149a245d72f4b9b0c5e4d845ac7562699806d8bce47f Jan 23 07:58:40 crc kubenswrapper[4982]: I0123 07:58:40.528117 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xr2l"] Jan 23 07:58:40 crc kubenswrapper[4982]: I0123 07:58:40.874174 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-79l5k" Jan 23 07:58:40 crc kubenswrapper[4982]: I0123 07:58:40.899264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6953a35-140d-4781-a533-4d6d61fd418d","Type":"ContainerStarted","Data":"f339ff09a9b6295247e2149a245d72f4b9b0c5e4d845ac7562699806d8bce47f"} Jan 23 07:58:40 crc kubenswrapper[4982]: I0123 07:58:40.924409 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-79l5k" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.208896 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2k6hw" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.209178 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2k6hw" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.277761 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2k6hw" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.436367 4982 patch_prober.go:28] interesting 
pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.436478 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.757758 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55vpx" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.758020 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55vpx" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.811875 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55vpx" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.905030 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xr2l" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="registry-server" containerID="cri-o://77197efa1d2703c2a6d30e1262e62ac47f476793746d989ce499bdf538d35ff5" gracePeriod=2 Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.941901 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55vpx" Jan 23 07:58:41 crc kubenswrapper[4982]: I0123 07:58:41.950143 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2k6hw" Jan 23 07:58:42 crc kubenswrapper[4982]: I0123 07:58:42.220569 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:58:42 crc kubenswrapper[4982]: I0123 07:58:42.220645 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:58:42 crc kubenswrapper[4982]: I0123 07:58:42.288913 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:58:42 crc kubenswrapper[4982]: I0123 07:58:42.915054 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6953a35-140d-4781-a533-4d6d61fd418d","Type":"ContainerStarted","Data":"7505c71f4479ceb1818aabb384b3724fb8a1e2cbd72f6ee0093497291f97bc28"} Jan 23 07:58:42 crc kubenswrapper[4982]: I0123 07:58:42.996533 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:58:43 crc kubenswrapper[4982]: I0123 07:58:43.534518 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2k6hw"] Jan 23 07:58:43 crc kubenswrapper[4982]: I0123 07:58:43.924469 4982 generic.go:334] "Generic (PLEG): container finished" podID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerID="77197efa1d2703c2a6d30e1262e62ac47f476793746d989ce499bdf538d35ff5" exitCode=0 Jan 23 07:58:43 crc kubenswrapper[4982]: I0123 07:58:43.924704 4982 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerDied","Data":"77197efa1d2703c2a6d30e1262e62ac47f476793746d989ce499bdf538d35ff5"} Jan 23 07:58:43 crc kubenswrapper[4982]: I0123 07:58:43.966780 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.966745555 podStartE2EDuration="4.966745555s" podCreationTimestamp="2026-01-23 07:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:43.960657611 +0000 UTC m=+198.441021047" watchObservedRunningTime="2026-01-23 07:58:43.966745555 +0000 UTC m=+198.447108981" Jan 23 07:58:44 crc kubenswrapper[4982]: I0123 07:58:44.930076 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2k6hw" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="registry-server" containerID="cri-o://16d5d4e4f5e9edbfc289d7145377ef7c4f5377168734fef6d6f02839f390fcfc" gracePeriod=2 Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.487307 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5464cb48c7-p54m6"] Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.488768 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" podUID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" containerName="controller-manager" containerID="cri-o://17cf6e260143ad6250597edf70c79e4076420af3f3888ffa058cb8055616167b" gracePeriod=30 Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.507613 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf"] Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.507873 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" podUID="b66576eb-9007-4a4b-8e7a-c5c28570d825" containerName="route-controller-manager" containerID="cri-o://1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce" gracePeriod=30 Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.736906 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.832166 4982 patch_prober.go:28] interesting pod/controller-manager-5464cb48c7-p54m6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.832218 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" podUID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.897829 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-catalog-content\") pod \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.897902 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw5ht\" (UniqueName: \"kubernetes.io/projected/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-kube-api-access-cw5ht\") pod \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.897952 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-utilities\") pod \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\" (UID: \"507e0f16-b5ba-4a65-8f55-e0a66506f3e0\") " Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.898989 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-utilities" (OuterVolumeSpecName: "utilities") pod "507e0f16-b5ba-4a65-8f55-e0a66506f3e0" (UID: "507e0f16-b5ba-4a65-8f55-e0a66506f3e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.904706 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-kube-api-access-cw5ht" (OuterVolumeSpecName: "kube-api-access-cw5ht") pod "507e0f16-b5ba-4a65-8f55-e0a66506f3e0" (UID: "507e0f16-b5ba-4a65-8f55-e0a66506f3e0"). InnerVolumeSpecName "kube-api-access-cw5ht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.935698 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg2vk"] Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.936046 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wg2vk" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="registry-server" containerID="cri-o://2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11" gracePeriod=2 Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.944131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xr2l" event={"ID":"507e0f16-b5ba-4a65-8f55-e0a66506f3e0","Type":"ContainerDied","Data":"3223a861fe76c3c7e48f427b77ca7b4524cff11914ad5a961c01b5fc660f6894"} Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.944221 4982 scope.go:117] "RemoveContainer" containerID="77197efa1d2703c2a6d30e1262e62ac47f476793746d989ce499bdf538d35ff5" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.944223 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xr2l" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.965331 4982 scope.go:117] "RemoveContainer" containerID="ff66d1b877d470648e2845b4e51485351dbbf3009a5af77438c34bd9f7504956" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.976975 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "507e0f16-b5ba-4a65-8f55-e0a66506f3e0" (UID: "507e0f16-b5ba-4a65-8f55-e0a66506f3e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.986290 4982 scope.go:117] "RemoveContainer" containerID="4f66ff2d217c61259651997d9f3b715fe21b5700cf4f6225ce40d4c96ef6f11d" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.999527 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.999557 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:45 crc kubenswrapper[4982]: I0123 07:58:45.999569 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw5ht\" (UniqueName: \"kubernetes.io/projected/507e0f16-b5ba-4a65-8f55-e0a66506f3e0-kube-api-access-cw5ht\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:46 crc kubenswrapper[4982]: I0123 07:58:46.297260 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xr2l"] Jan 23 07:58:46 crc kubenswrapper[4982]: I0123 07:58:46.304570 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xr2l"] Jan 23 07:58:46 crc kubenswrapper[4982]: I0123 07:58:46.952720 4982 generic.go:334] "Generic (PLEG): container finished" podID="0dc556e8-7362-4cba-8d17-bf824323503b" containerID="16d5d4e4f5e9edbfc289d7145377ef7c4f5377168734fef6d6f02839f390fcfc" exitCode=0 Jan 23 07:58:46 crc kubenswrapper[4982]: I0123 07:58:46.952795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2k6hw" event={"ID":"0dc556e8-7362-4cba-8d17-bf824323503b","Type":"ContainerDied","Data":"16d5d4e4f5e9edbfc289d7145377ef7c4f5377168734fef6d6f02839f390fcfc"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.437724 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2k6hw" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.623852 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-utilities\") pod \"0dc556e8-7362-4cba-8d17-bf824323503b\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.623989 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfdqk\" (UniqueName: \"kubernetes.io/projected/0dc556e8-7362-4cba-8d17-bf824323503b-kube-api-access-mfdqk\") pod \"0dc556e8-7362-4cba-8d17-bf824323503b\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.624023 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-catalog-content\") pod \"0dc556e8-7362-4cba-8d17-bf824323503b\" (UID: \"0dc556e8-7362-4cba-8d17-bf824323503b\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.625166 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-utilities" (OuterVolumeSpecName: "utilities") pod "0dc556e8-7362-4cba-8d17-bf824323503b" (UID: "0dc556e8-7362-4cba-8d17-bf824323503b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.644545 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc556e8-7362-4cba-8d17-bf824323503b-kube-api-access-mfdqk" (OuterVolumeSpecName: "kube-api-access-mfdqk") pod "0dc556e8-7362-4cba-8d17-bf824323503b" (UID: "0dc556e8-7362-4cba-8d17-bf824323503b"). InnerVolumeSpecName "kube-api-access-mfdqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.666928 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dc556e8-7362-4cba-8d17-bf824323503b" (UID: "0dc556e8-7362-4cba-8d17-bf824323503b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.667272 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.725615 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.725683 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfdqk\" (UniqueName: \"kubernetes.io/projected/0dc556e8-7362-4cba-8d17-bf824323503b-kube-api-access-mfdqk\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.725697 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc556e8-7362-4cba-8d17-bf824323503b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.820558 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.826309 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stzww\" (UniqueName: \"kubernetes.io/projected/b6b04d65-1eea-4372-869c-30f41b5afb4c-kube-api-access-stzww\") pod \"b6b04d65-1eea-4372-869c-30f41b5afb4c\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.826531 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-catalog-content\") pod \"b6b04d65-1eea-4372-869c-30f41b5afb4c\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.826658 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-utilities\") pod \"b6b04d65-1eea-4372-869c-30f41b5afb4c\" (UID: \"b6b04d65-1eea-4372-869c-30f41b5afb4c\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.828052 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-utilities" (OuterVolumeSpecName: "utilities") pod "b6b04d65-1eea-4372-869c-30f41b5afb4c" (UID: "b6b04d65-1eea-4372-869c-30f41b5afb4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.828916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b04d65-1eea-4372-869c-30f41b5afb4c-kube-api-access-stzww" (OuterVolumeSpecName: "kube-api-access-stzww") pod "b6b04d65-1eea-4372-869c-30f41b5afb4c" (UID: "b6b04d65-1eea-4372-869c-30f41b5afb4c"). InnerVolumeSpecName "kube-api-access-stzww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.855142 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" path="/var/lib/kubelet/pods/507e0f16-b5ba-4a65-8f55-e0a66506f3e0/volumes" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.928078 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-config\") pod \"b66576eb-9007-4a4b-8e7a-c5c28570d825\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.928438 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngfcd\" (UniqueName: \"kubernetes.io/projected/b66576eb-9007-4a4b-8e7a-c5c28570d825-kube-api-access-ngfcd\") pod \"b66576eb-9007-4a4b-8e7a-c5c28570d825\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.928535 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-client-ca\") pod \"b66576eb-9007-4a4b-8e7a-c5c28570d825\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.928572 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b66576eb-9007-4a4b-8e7a-c5c28570d825-serving-cert\") pod \"b66576eb-9007-4a4b-8e7a-c5c28570d825\" (UID: \"b66576eb-9007-4a4b-8e7a-c5c28570d825\") " Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.928813 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stzww\" (UniqueName: \"kubernetes.io/projected/b6b04d65-1eea-4372-869c-30f41b5afb4c-kube-api-access-stzww\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.928830 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.929471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-client-ca" (OuterVolumeSpecName: "client-ca") pod "b66576eb-9007-4a4b-8e7a-c5c28570d825" (UID: "b66576eb-9007-4a4b-8e7a-c5c28570d825"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.929509 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-config" (OuterVolumeSpecName: "config") pod "b66576eb-9007-4a4b-8e7a-c5c28570d825" (UID: "b66576eb-9007-4a4b-8e7a-c5c28570d825"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.931420 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66576eb-9007-4a4b-8e7a-c5c28570d825-kube-api-access-ngfcd" (OuterVolumeSpecName: "kube-api-access-ngfcd") pod "b66576eb-9007-4a4b-8e7a-c5c28570d825" (UID: "b66576eb-9007-4a4b-8e7a-c5c28570d825"). InnerVolumeSpecName "kube-api-access-ngfcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.931424 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b66576eb-9007-4a4b-8e7a-c5c28570d825-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b66576eb-9007-4a4b-8e7a-c5c28570d825" (UID: "b66576eb-9007-4a4b-8e7a-c5c28570d825"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.952817 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b04d65-1eea-4372-869c-30f41b5afb4c" (UID: "b6b04d65-1eea-4372-869c-30f41b5afb4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.959778 4982 generic.go:334] "Generic (PLEG): container finished" podID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" containerID="17cf6e260143ad6250597edf70c79e4076420af3f3888ffa058cb8055616167b" exitCode=0 Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.959854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" event={"ID":"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b","Type":"ContainerDied","Data":"17cf6e260143ad6250597edf70c79e4076420af3f3888ffa058cb8055616167b"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.959891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" event={"ID":"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b","Type":"ContainerDied","Data":"b511ff3ca2587ede40c3d2c70cb7caaaf755c2f27fd0923699b0eb58752dfc9f"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.959905 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b511ff3ca2587ede40c3d2c70cb7caaaf755c2f27fd0923699b0eb58752dfc9f" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.960874 4982 generic.go:334] "Generic (PLEG): container finished" podID="b66576eb-9007-4a4b-8e7a-c5c28570d825" containerID="1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce" exitCode=0 Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.960906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" event={"ID":"b66576eb-9007-4a4b-8e7a-c5c28570d825","Type":"ContainerDied","Data":"1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.960921 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" event={"ID":"b66576eb-9007-4a4b-8e7a-c5c28570d825","Type":"ContainerDied","Data":"c8a22ad870e55287fb1874a1e8797d1acbf78b6e3f6ec1949bb7a2883458615d"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.960937 4982 scope.go:117] "RemoveContainer" containerID="1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.961040 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.971551 4982 generic.go:334] "Generic (PLEG): container finished" podID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerID="2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11" exitCode=0 Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.971671 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerDied","Data":"2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.971710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wg2vk" event={"ID":"b6b04d65-1eea-4372-869c-30f41b5afb4c","Type":"ContainerDied","Data":"69992218e9f32712fd7c6e9edc9705a0b86b2184a290b384549e70ad5fb1cc9b"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.971810 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wg2vk" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.974837 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.975271 4982 generic.go:334] "Generic (PLEG): container finished" podID="19faede0-5207-4753-846b-4e4975ebc03a" containerID="dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67" exitCode=0 Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.975341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pqmw" event={"ID":"19faede0-5207-4753-846b-4e4975ebc03a","Type":"ContainerDied","Data":"dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.978173 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2k6hw" event={"ID":"0dc556e8-7362-4cba-8d17-bf824323503b","Type":"ContainerDied","Data":"4649c62a1e7cb603d5fa84a04a1dd6df0fc197dca80913441da50e1571b7e6cd"} Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.978255 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2k6hw" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.996575 4982 scope.go:117] "RemoveContainer" containerID="1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce" Jan 23 07:58:47 crc kubenswrapper[4982]: E0123 07:58:47.997046 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce\": container with ID starting with 1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce not found: ID does not exist" containerID="1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.997080 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce"} err="failed to get container status \"1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce\": rpc error: code = NotFound desc = could not find container \"1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce\": container with ID starting with 1369c3a28b8fcd47a57ef1f1f62b93d1fa24ebb4ed2815dd0aa107dc43a525ce not found: ID does not exist" Jan 23 07:58:47 crc kubenswrapper[4982]: I0123 07:58:47.997130 4982 scope.go:117] "RemoveContainer" containerID="2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.009293 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf"] Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.014301 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-587dbd655d-9fgpf"] Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.022854 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wg2vk"] Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.023329 4982 scope.go:117] "RemoveContainer" containerID="ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.026120 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wg2vk"] Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.031233 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.031258 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b66576eb-9007-4a4b-8e7a-c5c28570d825-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.031270 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b04d65-1eea-4372-869c-30f41b5afb4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.031340 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b66576eb-9007-4a4b-8e7a-c5c28570d825-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.031351 4982 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-ngfcd\" (UniqueName: \"kubernetes.io/projected/b66576eb-9007-4a4b-8e7a-c5c28570d825-kube-api-access-ngfcd\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.037930 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2k6hw"] Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.040882 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2k6hw"] Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.044852 4982 scope.go:117] "RemoveContainer" containerID="08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.059231 4982 scope.go:117] "RemoveContainer" containerID="2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11" Jan 23 07:58:48 crc kubenswrapper[4982]: E0123 07:58:48.059681 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11\": container with ID starting with 2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11 not found: ID does not exist" containerID="2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.059727 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11"} err="failed to get container status \"2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11\": rpc error: code = NotFound desc = could not find container \"2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11\": container with ID starting with 2398dbbf99a532cb278432eec4a0223add9055ef6e56e31555ad221a66939a11 not found: ID does not exist" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.059760 4982 scope.go:117] "RemoveContainer" containerID="ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52" Jan 23 07:58:48 crc kubenswrapper[4982]: E0123 07:58:48.059977 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52\": container with ID starting with ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52 not found: ID does not exist" containerID="ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.059998 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52"} err="failed to get container status \"ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52\": rpc error: code = NotFound desc = could not find container \"ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52\": container with ID starting with ae67c7e00057beaaca667d98cfb26042ff1374a1720e8166c3746e29a40c1f52 not found: ID does not exist" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.060011 4982 scope.go:117] "RemoveContainer" containerID="08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421" Jan 23 07:58:48 crc kubenswrapper[4982]: E0123 07:58:48.060213 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421\": container with ID starting with 08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421 not found: ID does not exist" containerID="08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.060247 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421"} err="failed to get container status \"08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421\": rpc error: code = NotFound desc = could not find container \"08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421\": container with ID starting with 08265c28b8ded1beedfa6e37156a18a0b00be74184b1bd7a6f05fc067a549421 not found: ID does not exist" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.060302 4982 scope.go:117] "RemoveContainer" containerID="16d5d4e4f5e9edbfc289d7145377ef7c4f5377168734fef6d6f02839f390fcfc" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.072389 4982 scope.go:117] "RemoveContainer" containerID="8e580b9bf3db19d4926d3108e5a55ca8881e4b8755cbd7b6d5657a69e14e2e2e" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.083527 4982 scope.go:117] "RemoveContainer" containerID="e47cd81cb1a62a6a099b44fefcbbcf1d2942058087a881ac6e234e037e31986e" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.132980 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4jjp\" (UniqueName: \"kubernetes.io/projected/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-kube-api-access-l4jjp\") pod \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.133110 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-serving-cert\") pod \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.133187 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-proxy-ca-bundles\") pod \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.133245 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-client-ca\") pod \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.133277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-config\") pod \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\" (UID: \"d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b\") " Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.134216 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" (UID: "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.134201 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" (UID: "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.134276 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-config" (OuterVolumeSpecName: "config") pod "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" (UID: "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.136961 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-kube-api-access-l4jjp" (OuterVolumeSpecName: "kube-api-access-l4jjp") pod "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" (UID: "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b"). InnerVolumeSpecName "kube-api-access-l4jjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.137403 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" (UID: "d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.235489 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4jjp\" (UniqueName: \"kubernetes.io/projected/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-kube-api-access-l4jjp\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.235523 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.235533 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.235541 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.235550 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.766712 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.986505 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pqmw" 
event={"ID":"19faede0-5207-4753-846b-4e4975ebc03a","Type":"ContainerStarted","Data":"3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458"} Jan 23 07:58:48 crc kubenswrapper[4982]: I0123 07:58:48.988486 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5464cb48c7-p54m6" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.006057 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9pqmw" podStartSLOduration=4.395556723 podStartE2EDuration="1m1.006036442s" podCreationTimestamp="2026-01-23 07:57:48 +0000 UTC" firstStartedPulling="2026-01-23 07:57:51.964038049 +0000 UTC m=+146.444401425" lastFinishedPulling="2026-01-23 07:58:48.574517758 +0000 UTC m=+203.054881144" observedRunningTime="2026-01-23 07:58:49.0044699 +0000 UTC m=+203.484833296" watchObservedRunningTime="2026-01-23 07:58:49.006036442 +0000 UTC m=+203.486399828" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.019311 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5464cb48c7-p54m6"] Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.023416 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5464cb48c7-p54m6"] Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057062 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f67648c7-lmn2x"] Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057543 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="extract-utilities" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057567 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="extract-utilities" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057583 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66576eb-9007-4a4b-8e7a-c5c28570d825" containerName="route-controller-manager" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057594 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66576eb-9007-4a4b-8e7a-c5c28570d825" containerName="route-controller-manager" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057608 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057616 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057657 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057665 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057677 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="extract-content" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057685 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" 
containerName="extract-content" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057695 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057703 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057715 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="extract-utilities" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057722 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="extract-utilities" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057736 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="extract-content" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057744 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="extract-content" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057756 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="extract-content" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057764 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="extract-content" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057774 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" containerName="controller-manager" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057782 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" containerName="controller-manager" Jan 23 07:58:49 crc kubenswrapper[4982]: E0123 07:58:49.057793 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="extract-utilities" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057800 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="extract-utilities" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057913 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057927 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057938 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="507e0f16-b5ba-4a65-8f55-e0a66506f3e0" containerName="registry-server" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057948 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66576eb-9007-4a4b-8e7a-c5c28570d825" containerName="route-controller-manager" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.057962 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" containerName="controller-manager" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.058409 4982 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.061928 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn"] Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.062908 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.068130 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.068519 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.068979 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.069126 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.069241 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.069287 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.069800 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.070268 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.087478 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.091574 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.093091 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.093438 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.093793 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.097423 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f67648c7-lmn2x"] Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.102603 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn"] Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248204 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-config\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248257 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-proxy-ca-bundles\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248288 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-client-ca\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkc2p\" (UniqueName: \"kubernetes.io/projected/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-kube-api-access-zkc2p\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248523 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f697a3-093c-421b-a727-c452fcc95745-serving-cert\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248586 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-config\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248729 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-serving-cert\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-client-ca\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.248899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz5gc\" 
(UniqueName: \"kubernetes.io/projected/11f697a3-093c-421b-a727-c452fcc95745-kube-api-access-dz5gc\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.313864 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.313950 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350159 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-config\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-proxy-ca-bundles\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350267 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-client-ca\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350297 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkc2p\" (UniqueName: \"kubernetes.io/projected/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-kube-api-access-zkc2p\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350327 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f697a3-093c-421b-a727-c452fcc95745-serving-cert\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350348 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-config\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350380 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-serving-cert\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " 
pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350404 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-client-ca\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.350438 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz5gc\" (UniqueName: \"kubernetes.io/projected/11f697a3-093c-421b-a727-c452fcc95745-kube-api-access-dz5gc\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.352491 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-proxy-ca-bundles\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.353821 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-config\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.354158 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-config\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.354539 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-client-ca\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.355274 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-client-ca\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.359672 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-serving-cert\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.370347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f697a3-093c-421b-a727-c452fcc95745-serving-cert\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.384352 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkc2p\" (UniqueName: \"kubernetes.io/projected/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-kube-api-access-zkc2p\") pod \"route-controller-manager-5b5994dfc9-g8wkn\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.384663 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz5gc\" (UniqueName: \"kubernetes.io/projected/11f697a3-093c-421b-a727-c452fcc95745-kube-api-access-dz5gc\") pod \"controller-manager-6f67648c7-lmn2x\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.411351 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.419724 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.672474 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f67648c7-lmn2x"] Jan 23 07:58:49 crc kubenswrapper[4982]: W0123 07:58:49.679065 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11f697a3_093c_421b_a727_c452fcc95745.slice/crio-b6ed24a42fa24e99573eab1cc79cb6c7ef47df2a916800d5fa6f5dabf191fb77 WatchSource:0}: Error finding container b6ed24a42fa24e99573eab1cc79cb6c7ef47df2a916800d5fa6f5dabf191fb77: Status 404 returned error can't find the container with id b6ed24a42fa24e99573eab1cc79cb6c7ef47df2a916800d5fa6f5dabf191fb77 Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.720198 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn"] Jan 23 07:58:49 crc kubenswrapper[4982]: W0123 07:58:49.723693 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ff525c5_fa0e_4b23_8f5f_b476ba47a554.slice/crio-483f8afe93758d1e15a1a00c1d862754f348a3eddc4e26aeb830002ff9b2ea82 WatchSource:0}: Error finding container 483f8afe93758d1e15a1a00c1d862754f348a3eddc4e26aeb830002ff9b2ea82: Status 404 returned error can't find the container with id 483f8afe93758d1e15a1a00c1d862754f348a3eddc4e26aeb830002ff9b2ea82 Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.834684 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc556e8-7362-4cba-8d17-bf824323503b" path="/var/lib/kubelet/pods/0dc556e8-7362-4cba-8d17-bf824323503b/volumes" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.835486 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66576eb-9007-4a4b-8e7a-c5c28570d825" path="/var/lib/kubelet/pods/b66576eb-9007-4a4b-8e7a-c5c28570d825/volumes" Jan 23 
07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.835996 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b04d65-1eea-4372-869c-30f41b5afb4c" path="/var/lib/kubelet/pods/b6b04d65-1eea-4372-869c-30f41b5afb4c/volumes" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.837126 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b" path="/var/lib/kubelet/pods/d2fa6ae5-3d73-4889-be9e-eaf022ddaf1b/volumes" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.994898 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" event={"ID":"11f697a3-093c-421b-a727-c452fcc95745","Type":"ContainerStarted","Data":"8901ba56a87bcdbabf554c3c8195873989506d07fa442b7bf7f34e59e7fd6839"} Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.994959 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" event={"ID":"11f697a3-093c-421b-a727-c452fcc95745","Type":"ContainerStarted","Data":"b6ed24a42fa24e99573eab1cc79cb6c7ef47df2a916800d5fa6f5dabf191fb77"} Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.996195 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.999510 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" event={"ID":"8ff525c5-fa0e-4b23-8f5f-b476ba47a554","Type":"ContainerStarted","Data":"c068b1415e17e060c982cf63f1baa9a6b6d571a35c7470beea66160e553629b4"} Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.999537 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" event={"ID":"8ff525c5-fa0e-4b23-8f5f-b476ba47a554","Type":"ContainerStarted","Data":"483f8afe93758d1e15a1a00c1d862754f348a3eddc4e26aeb830002ff9b2ea82"} Jan 23 07:58:49 crc kubenswrapper[4982]: I0123 07:58:49.999552 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:50 crc kubenswrapper[4982]: I0123 07:58:50.002100 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:58:50 crc kubenswrapper[4982]: I0123 07:58:50.032943 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" podStartSLOduration=5.032925773 podStartE2EDuration="5.032925773s" podCreationTimestamp="2026-01-23 07:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:58:50.029506791 +0000 UTC m=+204.509870187" watchObservedRunningTime="2026-01-23 07:58:50.032925773 +0000 UTC m=+204.513289169" Jan 23 07:58:50 crc kubenswrapper[4982]: I0123 07:58:50.094610 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" podStartSLOduration=5.094585814 podStartE2EDuration="5.094585814s" podCreationTimestamp="2026-01-23 07:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 07:58:50.087943375 +0000 UTC m=+204.568306771" watchObservedRunningTime="2026-01-23 07:58:50.094585814 +0000 UTC m=+204.574949200" Jan 23 07:58:50 crc kubenswrapper[4982]: I0123 07:58:50.367303 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9pqmw" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="registry-server" probeResult="failure" output=< Jan 23 07:58:50 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 07:58:50 crc kubenswrapper[4982]: > Jan 23 07:58:50 crc kubenswrapper[4982]: I0123 07:58:50.489470 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:58:56 crc kubenswrapper[4982]: I0123 07:58:56.789472 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" podUID="458b850a-fd8f-4b10-b504-76f0db87d7ad" containerName="oauth-openshift" containerID="cri-o://54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd" gracePeriod=15 Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.836485 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.993810 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-idp-0-file-data\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.993902 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-provider-selection\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.993952 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-error\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.993982 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-session\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994012 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-dir\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994038 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies\") 
pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994067 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-trusted-ca-bundle\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994098 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6pb6\" (UniqueName: \"kubernetes.io/projected/458b850a-fd8f-4b10-b504-76f0db87d7ad-kube-api-access-m6pb6\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994127 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994147 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-serving-cert\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994267 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-router-certs\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994298 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-ocp-branding-template\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994332 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-cliconfig\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994366 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-service-ca\") pod \"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994390 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-login\") pod 
\"458b850a-fd8f-4b10-b504-76f0db87d7ad\" (UID: \"458b850a-fd8f-4b10-b504-76f0db87d7ad\") " Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.994840 4982 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.996340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.996799 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.996901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:57 crc kubenswrapper[4982]: I0123 07:58:57.998398 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.002677 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.009867 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.009974 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458b850a-fd8f-4b10-b504-76f0db87d7ad-kube-api-access-m6pb6" (OuterVolumeSpecName: "kube-api-access-m6pb6") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "kube-api-access-m6pb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.010112 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.010519 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.011074 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.012096 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.012220 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.012409 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "458b850a-fd8f-4b10-b504-76f0db87d7ad" (UID: "458b850a-fd8f-4b10-b504-76f0db87d7ad"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.043447 4982 generic.go:334] "Generic (PLEG): container finished" podID="458b850a-fd8f-4b10-b504-76f0db87d7ad" containerID="54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd" exitCode=0 Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.043734 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" event={"ID":"458b850a-fd8f-4b10-b504-76f0db87d7ad","Type":"ContainerDied","Data":"54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd"} Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.043828 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" event={"ID":"458b850a-fd8f-4b10-b504-76f0db87d7ad","Type":"ContainerDied","Data":"b2094d2d78ff47dda3f51fd8b2ca5032e4b434ca38d1e3abf5ecc8436d402e44"} Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.043968 4982 scope.go:117] "RemoveContainer" containerID="54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.044116 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vgf6m" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.061779 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-ksmpj"] Jan 23 07:58:58 crc kubenswrapper[4982]: E0123 07:58:58.062139 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458b850a-fd8f-4b10-b504-76f0db87d7ad" containerName="oauth-openshift" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.062183 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="458b850a-fd8f-4b10-b504-76f0db87d7ad" containerName="oauth-openshift" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.062330 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="458b850a-fd8f-4b10-b504-76f0db87d7ad" containerName="oauth-openshift" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.062974 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.066773 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.067307 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.067641 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.067325 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.068263 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.069838 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.071208 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.071341 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.071406 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.071871 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.073184 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.076454 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.077827 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-ksmpj"] Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.081757 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.088763 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.089161 4982 scope.go:117] "RemoveContainer" containerID="54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd" Jan 23 07:58:58 crc kubenswrapper[4982]: E0123 07:58:58.091264 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd\": container with ID starting with 54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd not found: ID does not exist" 
containerID="54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.091377 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd"} err="failed to get container status \"54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd\": rpc error: code = NotFound desc = could not find container \"54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd\": container with ID starting with 54262e7fff6c6f3649b2343aacaf10064e7978a1e3fda76f2fca7a1568276abd not found: ID does not exist" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.091457 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.095697 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.099494 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.099653 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.099904 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100011 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100568 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100597 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100613 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100638 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100649 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100660 4982 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100670 4982 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/458b850a-fd8f-4b10-b504-76f0db87d7ad-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.100681 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6pb6\" (UniqueName: \"kubernetes.io/projected/458b850a-fd8f-4b10-b504-76f0db87d7ad-kube-api-access-m6pb6\") on node \"crc\" DevicePath \"\"" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.115322 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgf6m"] Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.119582 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vgf6m"] Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202185 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202217 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-audit-policies\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202263 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-audit-dir\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202342 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202663 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202776 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202846 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkqxg\" (UniqueName: \"kubernetes.io/projected/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-kube-api-access-kkqxg\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.202966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.203130 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " 
pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.203219 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.203298 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306128 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-audit-policies\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306291 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-audit-dir\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306492 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306565 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306605 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-audit-dir\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc 
kubenswrapper[4982]: I0123 07:58:58.306690 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306746 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306808 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkqxg\" (UniqueName: \"kubernetes.io/projected/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-kube-api-access-kkqxg\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306862 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306921 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.306977 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.307031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.307137 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc 
kubenswrapper[4982]: I0123 07:58:58.307216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.307402 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-audit-policies\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.308405 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.309502 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-service-ca\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.311297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-login\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.311908 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.311903 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.312018 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-session\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.312387 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-router-certs\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.313879 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.313890 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.316230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.316666 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-v4-0-config-user-template-error\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.340343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkqxg\" (UniqueName: \"kubernetes.io/projected/6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64-kube-api-access-kkqxg\") pod \"oauth-openshift-fdd74686d-ksmpj\" (UID: \"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64\") " pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.420396 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:58:58 crc kubenswrapper[4982]: I0123 07:58:58.928306 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-fdd74686d-ksmpj"] Jan 23 07:58:58 crc kubenswrapper[4982]: W0123 07:58:58.937914 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f3c4c50_d9c7_4b37_8e21_5bc74fe2ed64.slice/crio-a858703f4fd10bb5711124c1eaef70a7b69f0809feba4aac70c0b141d85d8a7b WatchSource:0}: Error finding container a858703f4fd10bb5711124c1eaef70a7b69f0809feba4aac70c0b141d85d8a7b: Status 404 returned error can't find the container with id a858703f4fd10bb5711124c1eaef70a7b69f0809feba4aac70c0b141d85d8a7b Jan 23 07:58:59 crc kubenswrapper[4982]: I0123 07:58:59.055317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" event={"ID":"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64","Type":"ContainerStarted","Data":"a858703f4fd10bb5711124c1eaef70a7b69f0809feba4aac70c0b141d85d8a7b"} Jan 23 07:58:59 crc kubenswrapper[4982]: I0123 07:58:59.359909 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:58:59 crc kubenswrapper[4982]: I0123 07:58:59.424286 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:58:59 crc kubenswrapper[4982]: I0123 07:58:59.601690 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pqmw"] Jan 23 07:58:59 crc kubenswrapper[4982]: I0123 07:58:59.841200 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458b850a-fd8f-4b10-b504-76f0db87d7ad" path="/var/lib/kubelet/pods/458b850a-fd8f-4b10-b504-76f0db87d7ad/volumes" Jan 23 07:59:00 crc kubenswrapper[4982]: I0123 07:59:00.066374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" event={"ID":"6f3c4c50-d9c7-4b37-8e21-5bc74fe2ed64","Type":"ContainerStarted","Data":"ba2229c6da2e4648faf2008e798c9b9d99252feda55deaec6f49bac95cd7654a"} Jan 23 07:59:00 crc kubenswrapper[4982]: I0123 07:59:00.116777 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" podStartSLOduration=29.116744908 podStartE2EDuration="29.116744908s" podCreationTimestamp="2026-01-23 07:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:59:00.10457726 +0000 UTC m=+214.584940676" watchObservedRunningTime="2026-01-23 07:59:00.116744908 +0000 UTC m=+214.597108304" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.072159 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9pqmw" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="registry-server" containerID="cri-o://3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458" gracePeriod=2 Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.072545 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.079238 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-authentication/oauth-openshift-fdd74686d-ksmpj" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.654321 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.762329 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2vjw\" (UniqueName: \"kubernetes.io/projected/19faede0-5207-4753-846b-4e4975ebc03a-kube-api-access-g2vjw\") pod \"19faede0-5207-4753-846b-4e4975ebc03a\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.762395 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-catalog-content\") pod \"19faede0-5207-4753-846b-4e4975ebc03a\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.762424 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-utilities\") pod \"19faede0-5207-4753-846b-4e4975ebc03a\" (UID: \"19faede0-5207-4753-846b-4e4975ebc03a\") " Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.763346 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-utilities" (OuterVolumeSpecName: "utilities") pod "19faede0-5207-4753-846b-4e4975ebc03a" (UID: "19faede0-5207-4753-846b-4e4975ebc03a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.769758 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19faede0-5207-4753-846b-4e4975ebc03a-kube-api-access-g2vjw" (OuterVolumeSpecName: "kube-api-access-g2vjw") pod "19faede0-5207-4753-846b-4e4975ebc03a" (UID: "19faede0-5207-4753-846b-4e4975ebc03a"). InnerVolumeSpecName "kube-api-access-g2vjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.821432 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19faede0-5207-4753-846b-4e4975ebc03a" (UID: "19faede0-5207-4753-846b-4e4975ebc03a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.865530 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2vjw\" (UniqueName: \"kubernetes.io/projected/19faede0-5207-4753-846b-4e4975ebc03a-kube-api-access-g2vjw\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.865581 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:01 crc kubenswrapper[4982]: I0123 07:59:01.865594 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19faede0-5207-4753-846b-4e4975ebc03a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.080841 4982 generic.go:334] "Generic (PLEG): container finished" podID="19faede0-5207-4753-846b-4e4975ebc03a" containerID="3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458" exitCode=0 Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.080956 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pqmw" event={"ID":"19faede0-5207-4753-846b-4e4975ebc03a","Type":"ContainerDied","Data":"3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458"} Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.081230 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9pqmw" event={"ID":"19faede0-5207-4753-846b-4e4975ebc03a","Type":"ContainerDied","Data":"e8932e5bf3aeceeec9a2ac499ff638e9e26c47cfddb3c89e826dbe582dfdf657"} Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.081268 4982 scope.go:117] "RemoveContainer" containerID="3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.080973 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9pqmw" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.098153 4982 scope.go:117] "RemoveContainer" containerID="dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.103945 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9pqmw"] Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.106356 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9pqmw"] Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.131022 4982 scope.go:117] "RemoveContainer" containerID="00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.148133 4982 scope.go:117] "RemoveContainer" containerID="3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458" Jan 23 07:59:02 crc kubenswrapper[4982]: E0123 07:59:02.148741 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458\": container with ID starting with 3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458 not found: ID does not exist" containerID="3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.148792 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458"} err="failed to get container status \"3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458\": rpc error: code = NotFound desc = could not find container \"3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458\": container with ID starting with 3ae2f5e98ee984830662c68c7de1d2346a9522f872552f03a6826c319c08f458 not found: ID does not exist" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.148831 4982 scope.go:117] "RemoveContainer" containerID="dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67" Jan 23 07:59:02 crc kubenswrapper[4982]: E0123 07:59:02.149419 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67\": container with ID starting with dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67 not found: ID does not exist" containerID="dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.149451 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67"} err="failed to get container status \"dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67\": rpc error: code = NotFound desc = could not find container \"dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67\": container with ID starting with dd59704d58dbb814397ff559fb3fd6fc5773d5302708cfe5676e34968f54ed67 not found: ID does not exist" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.149477 4982 scope.go:117] "RemoveContainer" containerID="00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d" Jan 23 07:59:02 crc kubenswrapper[4982]: E0123 07:59:02.149825 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d\": container with ID starting with 00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d not found: ID does not exist" containerID="00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d" Jan 23 07:59:02 crc kubenswrapper[4982]: I0123 07:59:02.149860 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d"} err="failed to get container status \"00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d\": rpc error: code = NotFound desc = could not find container \"00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d\": container with ID starting with 00fc97794653a44f6fc12e1b0c88b348fec41ec2d3051f893c5eb1068027953d not found: ID does not exist" Jan 23 07:59:03 crc kubenswrapper[4982]: I0123 07:59:03.841508 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19faede0-5207-4753-846b-4e4975ebc03a" path="/var/lib/kubelet/pods/19faede0-5207-4753-846b-4e4975ebc03a/volumes" Jan 23 07:59:05 crc kubenswrapper[4982]: I0123 07:59:05.523204 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f67648c7-lmn2x"] Jan 23 07:59:05 crc kubenswrapper[4982]: I0123 07:59:05.523662 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" podUID="11f697a3-093c-421b-a727-c452fcc95745" containerName="controller-manager" containerID="cri-o://8901ba56a87bcdbabf554c3c8195873989506d07fa442b7bf7f34e59e7fd6839" gracePeriod=30 Jan 23 07:59:05 crc kubenswrapper[4982]: I0123 07:59:05.651156 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn"] Jan 23 07:59:05 crc kubenswrapper[4982]: I0123 07:59:05.651375 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" podUID="8ff525c5-fa0e-4b23-8f5f-b476ba47a554" containerName="route-controller-manager" containerID="cri-o://c068b1415e17e060c982cf63f1baa9a6b6d571a35c7470beea66160e553629b4" gracePeriod=30 Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.108031 4982 generic.go:334] "Generic (PLEG): container finished" podID="11f697a3-093c-421b-a727-c452fcc95745" containerID="8901ba56a87bcdbabf554c3c8195873989506d07fa442b7bf7f34e59e7fd6839" exitCode=0 Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.108127 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" event={"ID":"11f697a3-093c-421b-a727-c452fcc95745","Type":"ContainerDied","Data":"8901ba56a87bcdbabf554c3c8195873989506d07fa442b7bf7f34e59e7fd6839"} Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.118522 4982 generic.go:334] "Generic (PLEG): container finished" podID="8ff525c5-fa0e-4b23-8f5f-b476ba47a554" containerID="c068b1415e17e060c982cf63f1baa9a6b6d571a35c7470beea66160e553629b4" exitCode=0 Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.118565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" 
event={"ID":"8ff525c5-fa0e-4b23-8f5f-b476ba47a554","Type":"ContainerDied","Data":"c068b1415e17e060c982cf63f1baa9a6b6d571a35c7470beea66160e553629b4"} Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.228923 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.231862 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkc2p\" (UniqueName: \"kubernetes.io/projected/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-kube-api-access-zkc2p\") pod \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.231980 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-serving-cert\") pod \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.232063 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-config\") pod \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.232085 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-client-ca\") pod \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\" (UID: \"8ff525c5-fa0e-4b23-8f5f-b476ba47a554\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.233568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-client-ca" (OuterVolumeSpecName: "client-ca") pod "8ff525c5-fa0e-4b23-8f5f-b476ba47a554" (UID: "8ff525c5-fa0e-4b23-8f5f-b476ba47a554"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.235607 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.238847 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-kube-api-access-zkc2p" (OuterVolumeSpecName: "kube-api-access-zkc2p") pod "8ff525c5-fa0e-4b23-8f5f-b476ba47a554" (UID: "8ff525c5-fa0e-4b23-8f5f-b476ba47a554"). InnerVolumeSpecName "kube-api-access-zkc2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.239113 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8ff525c5-fa0e-4b23-8f5f-b476ba47a554" (UID: "8ff525c5-fa0e-4b23-8f5f-b476ba47a554"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.239170 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-config" (OuterVolumeSpecName: "config") pod "8ff525c5-fa0e-4b23-8f5f-b476ba47a554" (UID: "8ff525c5-fa0e-4b23-8f5f-b476ba47a554"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332503 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-client-ca\") pod \"11f697a3-093c-421b-a727-c452fcc95745\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332551 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-proxy-ca-bundles\") pod \"11f697a3-093c-421b-a727-c452fcc95745\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332649 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-config\") pod \"11f697a3-093c-421b-a727-c452fcc95745\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332677 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz5gc\" (UniqueName: \"kubernetes.io/projected/11f697a3-093c-421b-a727-c452fcc95745-kube-api-access-dz5gc\") pod \"11f697a3-093c-421b-a727-c452fcc95745\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332702 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f697a3-093c-421b-a727-c452fcc95745-serving-cert\") pod \"11f697a3-093c-421b-a727-c452fcc95745\" (UID: \"11f697a3-093c-421b-a727-c452fcc95745\") " Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332826 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332836 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332845 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkc2p\" (UniqueName: \"kubernetes.io/projected/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-kube-api-access-zkc2p\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.332855 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ff525c5-fa0e-4b23-8f5f-b476ba47a554-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.333408 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"11f697a3-093c-421b-a727-c452fcc95745" (UID: "11f697a3-093c-421b-a727-c452fcc95745"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.333525 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-config" (OuterVolumeSpecName: "config") pod "11f697a3-093c-421b-a727-c452fcc95745" (UID: "11f697a3-093c-421b-a727-c452fcc95745"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.333657 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-client-ca" (OuterVolumeSpecName: "client-ca") pod "11f697a3-093c-421b-a727-c452fcc95745" (UID: "11f697a3-093c-421b-a727-c452fcc95745"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.337486 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f697a3-093c-421b-a727-c452fcc95745-kube-api-access-dz5gc" (OuterVolumeSpecName: "kube-api-access-dz5gc") pod "11f697a3-093c-421b-a727-c452fcc95745" (UID: "11f697a3-093c-421b-a727-c452fcc95745"). InnerVolumeSpecName "kube-api-access-dz5gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.337824 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f697a3-093c-421b-a727-c452fcc95745-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11f697a3-093c-421b-a727-c452fcc95745" (UID: "11f697a3-093c-421b-a727-c452fcc95745"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.433548 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-config\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.433588 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz5gc\" (UniqueName: \"kubernetes.io/projected/11f697a3-093c-421b-a727-c452fcc95745-kube-api-access-dz5gc\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.433598 4982 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11f697a3-093c-421b-a727-c452fcc95745-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.433608 4982 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:06 crc kubenswrapper[4982]: I0123 07:59:06.433616 4982 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11f697a3-093c-421b-a727-c452fcc95745-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.072790 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7"] Jan 23 07:59:07 crc kubenswrapper[4982]: E0123 07:59:07.073099 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="extract-content" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073116 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="extract-content" Jan 23 07:59:07 crc kubenswrapper[4982]: E0123 07:59:07.073131 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="registry-server" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073140 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="registry-server" Jan 23 07:59:07 crc kubenswrapper[4982]: E0123 07:59:07.073152 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f697a3-093c-421b-a727-c452fcc95745" containerName="controller-manager" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073160 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f697a3-093c-421b-a727-c452fcc95745" containerName="controller-manager" Jan 23 07:59:07 crc kubenswrapper[4982]: E0123 07:59:07.073168 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="extract-utilities" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073176 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="extract-utilities" Jan 23 07:59:07 crc kubenswrapper[4982]: E0123 07:59:07.073197 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff525c5-fa0e-4b23-8f5f-b476ba47a554" containerName="route-controller-manager" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073206 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff525c5-fa0e-4b23-8f5f-b476ba47a554" 
containerName="route-controller-manager" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073353 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff525c5-fa0e-4b23-8f5f-b476ba47a554" containerName="route-controller-manager" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073370 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="19faede0-5207-4753-846b-4e4975ebc03a" containerName="registry-server" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073380 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f697a3-093c-421b-a727-c452fcc95745" containerName="controller-manager" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.073896 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.079007 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7677679df7-mgb8d"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.079783 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.082425 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7677679df7-mgb8d"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.097307 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.126255 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" event={"ID":"8ff525c5-fa0e-4b23-8f5f-b476ba47a554","Type":"ContainerDied","Data":"483f8afe93758d1e15a1a00c1d862754f348a3eddc4e26aeb830002ff9b2ea82"} Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.126301 4982 scope.go:117] "RemoveContainer" containerID="c068b1415e17e060c982cf63f1baa9a6b6d571a35c7470beea66160e553629b4" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.126383 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.128857 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" event={"ID":"11f697a3-093c-421b-a727-c452fcc95745","Type":"ContainerDied","Data":"b6ed24a42fa24e99573eab1cc79cb6c7ef47df2a916800d5fa6f5dabf191fb77"} Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.128913 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f67648c7-lmn2x" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.146008 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-config\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.146055 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-client-ca\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.146089 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctbn6\" (UniqueName: \"kubernetes.io/projected/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-kube-api-access-ctbn6\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.146126 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-serving-cert\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.158015 4982 scope.go:117] "RemoveContainer" containerID="8901ba56a87bcdbabf554c3c8195873989506d07fa442b7bf7f34e59e7fd6839" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.168198 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.171138 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b5994dfc9-g8wkn"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.178828 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f67648c7-lmn2x"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.181535 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f67648c7-lmn2x"] Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247396 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjcl\" (UniqueName: \"kubernetes.io/projected/cca06715-e18b-4241-a1c6-9bff76f266fe-kube-api-access-jvjcl\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247454 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cca06715-e18b-4241-a1c6-9bff76f266fe-serving-cert\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247505 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-config\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247528 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-client-ca\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247553 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-config\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247575 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-proxy-ca-bundles\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247609 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctbn6\" (UniqueName: \"kubernetes.io/projected/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-kube-api-access-ctbn6\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247670 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-client-ca\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.247702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-serving-cert\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.248543 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-config\") pod 
\"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.249058 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-client-ca\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.251832 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-serving-cert\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.265565 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctbn6\" (UniqueName: \"kubernetes.io/projected/5abf639f-3a14-4429-ad7e-6f21a2c8ce4e-kube-api-access-ctbn6\") pod \"route-controller-manager-6b6ccddd7f-4whl7\" (UID: \"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e\") " pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.348390 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-client-ca\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.348469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjcl\" (UniqueName: \"kubernetes.io/projected/cca06715-e18b-4241-a1c6-9bff76f266fe-kube-api-access-jvjcl\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.348489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cca06715-e18b-4241-a1c6-9bff76f266fe-serving-cert\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.348519 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-config\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.348537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-proxy-ca-bundles\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " 
pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.349463 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-client-ca\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.349639 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-proxy-ca-bundles\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.349993 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cca06715-e18b-4241-a1c6-9bff76f266fe-config\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.351732 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cca06715-e18b-4241-a1c6-9bff76f266fe-serving-cert\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.369129 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjcl\" (UniqueName: \"kubernetes.io/projected/cca06715-e18b-4241-a1c6-9bff76f266fe-kube-api-access-jvjcl\") pod \"controller-manager-7677679df7-mgb8d\" (UID: \"cca06715-e18b-4241-a1c6-9bff76f266fe\") " pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.399789 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.411245 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.665491 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7677679df7-mgb8d"] Jan 23 07:59:07 crc kubenswrapper[4982]: W0123 07:59:07.669747 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca06715_e18b_4241_a1c6_9bff76f266fe.slice/crio-54d2a128e7ff398345500b411b84166915d6256a027510665370f9ce35aec10b WatchSource:0}: Error finding container 54d2a128e7ff398345500b411b84166915d6256a027510665370f9ce35aec10b: Status 404 returned error can't find the container with id 54d2a128e7ff398345500b411b84166915d6256a027510665370f9ce35aec10b Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.801319 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7"] Jan 23 07:59:07 crc kubenswrapper[4982]: W0123 07:59:07.809845 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5abf639f_3a14_4429_ad7e_6f21a2c8ce4e.slice/crio-86ed31e567c75f00c231a52f93e21f709e8891fb9d25b4759c300a8f745cdb15 WatchSource:0}: Error finding container 86ed31e567c75f00c231a52f93e21f709e8891fb9d25b4759c300a8f745cdb15: Status 404 returned error can't find the container with id 86ed31e567c75f00c231a52f93e21f709e8891fb9d25b4759c300a8f745cdb15 Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.835310 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f697a3-093c-421b-a727-c452fcc95745" path="/var/lib/kubelet/pods/11f697a3-093c-421b-a727-c452fcc95745/volumes" Jan 23 07:59:07 crc kubenswrapper[4982]: I0123 07:59:07.836251 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff525c5-fa0e-4b23-8f5f-b476ba47a554" path="/var/lib/kubelet/pods/8ff525c5-fa0e-4b23-8f5f-b476ba47a554/volumes" Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.137803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" event={"ID":"cca06715-e18b-4241-a1c6-9bff76f266fe","Type":"ContainerStarted","Data":"fa240e4087714cab42ed717c1ea26806e19a108815537425f766032cecd9115f"} Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.137852 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" event={"ID":"cca06715-e18b-4241-a1c6-9bff76f266fe","Type":"ContainerStarted","Data":"54d2a128e7ff398345500b411b84166915d6256a027510665370f9ce35aec10b"} Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.138836 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.152673 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" event={"ID":"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e","Type":"ContainerStarted","Data":"3c47e12cb0de037914f391664f9a58d502aaec38d62817e87c378387f09bd5a9"} Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.152705 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" 
event={"ID":"5abf639f-3a14-4429-ad7e-6f21a2c8ce4e","Type":"ContainerStarted","Data":"86ed31e567c75f00c231a52f93e21f709e8891fb9d25b4759c300a8f745cdb15"} Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.153124 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.153300 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.169808 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" podStartSLOduration=3.169791971 podStartE2EDuration="3.169791971s" podCreationTimestamp="2026-01-23 07:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:59:08.166817391 +0000 UTC m=+222.647180777" watchObservedRunningTime="2026-01-23 07:59:08.169791971 +0000 UTC m=+222.650155357" Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.204111 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" podStartSLOduration=3.204088585 podStartE2EDuration="3.204088585s" podCreationTimestamp="2026-01-23 07:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:59:08.201058913 +0000 UTC m=+222.681422319" watchObservedRunningTime="2026-01-23 07:59:08.204088585 +0000 UTC m=+222.684451971" Jan 23 07:59:08 crc kubenswrapper[4982]: I0123 07:59:08.700160 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b6ccddd7f-4whl7" Jan 23 07:59:11 crc kubenswrapper[4982]: I0123 07:59:11.436290 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 07:59:11 crc kubenswrapper[4982]: I0123 07:59:11.436786 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 07:59:11 crc kubenswrapper[4982]: I0123 07:59:11.436855 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 07:59:11 crc kubenswrapper[4982]: I0123 07:59:11.437526 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 07:59:11 crc kubenswrapper[4982]: I0123 07:59:11.437611 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a" gracePeriod=600 Jan 23 07:59:12 crc kubenswrapper[4982]: I0123 07:59:12.182333 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a" exitCode=0 Jan 23 07:59:12 crc kubenswrapper[4982]: I0123 07:59:12.182409 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a"} Jan 23 07:59:13 crc kubenswrapper[4982]: I0123 07:59:13.192563 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"7bc7054b31b8357316441eebf4f999fc630e05a95e4ee23929f60ab52fc5577f"} Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.783337 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hljdw"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.784435 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hljdw" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="registry-server" containerID="cri-o://93319a044b667ca77e1812fe5dafb131eefdfa01bd19afa9caa35da26c442b49" gracePeriod=30 Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.793276 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7j6f"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.793597 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7j6f" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="registry-server" containerID="cri-o://c4881b02c0d38e4dd549f3804ba958cb520d297080d5fe309ee941309cb025ed" gracePeriod=30 Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.801028 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rvcj"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.801229 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" podUID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" containerName="marketplace-operator" containerID="cri-o://420d82d16393b96d366944fe37455fbf59201e2a61e974b72fbfc3a11c32dded" gracePeriod=30 Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.814484 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79l5k"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.814750 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-79l5k" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="registry-server" containerID="cri-o://388e7a703050c17e98728877ed36dbe0172090ff779f5416ca9a29a9d3d3d18f" gracePeriod=30 Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.857417 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-55vpx"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.857497 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-58lhz"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.858125 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55vpx" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="registry-server" containerID="cri-o://fd06a92141e7e1a9b2f782938a2e0e1bb791c79eaf9bf6217223206b43a9ee4a" gracePeriod=30 Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.860009 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-58lhz"] Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.860305 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.892660 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd59f7ae-601a-424d-8800-d30e25869a77-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.892714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmdt\" (UniqueName: \"kubernetes.io/projected/dd59f7ae-601a-424d-8800-d30e25869a77-kube-api-access-gfmdt\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.892824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd59f7ae-601a-424d-8800-d30e25869a77-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.994143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd59f7ae-601a-424d-8800-d30e25869a77-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.994218 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd59f7ae-601a-424d-8800-d30e25869a77-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.994257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfmdt\" (UniqueName: \"kubernetes.io/projected/dd59f7ae-601a-424d-8800-d30e25869a77-kube-api-access-gfmdt\") pod \"marketplace-operator-79b997595-58lhz\" (UID: 
\"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:15 crc kubenswrapper[4982]: I0123 07:59:15.996781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd59f7ae-601a-424d-8800-d30e25869a77-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.019617 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfmdt\" (UniqueName: \"kubernetes.io/projected/dd59f7ae-601a-424d-8800-d30e25869a77-kube-api-access-gfmdt\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.019778 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd59f7ae-601a-424d-8800-d30e25869a77-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-58lhz\" (UID: \"dd59f7ae-601a-424d-8800-d30e25869a77\") " pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.226340 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.232016 4982 generic.go:334] "Generic (PLEG): container finished" podID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" containerID="420d82d16393b96d366944fe37455fbf59201e2a61e974b72fbfc3a11c32dded" exitCode=0 Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.232105 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" event={"ID":"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef","Type":"ContainerDied","Data":"420d82d16393b96d366944fe37455fbf59201e2a61e974b72fbfc3a11c32dded"} Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.239197 4982 generic.go:334] "Generic (PLEG): container finished" podID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerID="93319a044b667ca77e1812fe5dafb131eefdfa01bd19afa9caa35da26c442b49" exitCode=0 Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.239260 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerDied","Data":"93319a044b667ca77e1812fe5dafb131eefdfa01bd19afa9caa35da26c442b49"} Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.242126 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerID="c4881b02c0d38e4dd549f3804ba958cb520d297080d5fe309ee941309cb025ed" exitCode=0 Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.242180 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerDied","Data":"c4881b02c0d38e4dd549f3804ba958cb520d297080d5fe309ee941309cb025ed"} Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.245564 4982 generic.go:334] "Generic (PLEG): container finished" podID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" 
containerID="fd06a92141e7e1a9b2f782938a2e0e1bb791c79eaf9bf6217223206b43a9ee4a" exitCode=0 Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.245682 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerDied","Data":"fd06a92141e7e1a9b2f782938a2e0e1bb791c79eaf9bf6217223206b43a9ee4a"} Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.249172 4982 generic.go:334] "Generic (PLEG): container finished" podID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerID="388e7a703050c17e98728877ed36dbe0172090ff779f5416ca9a29a9d3d3d18f" exitCode=0 Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.249220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerDied","Data":"388e7a703050c17e98728877ed36dbe0172090ff779f5416ca9a29a9d3d3d18f"} Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.393721 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.433457 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79l5k" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.442140 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55vpx" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.529419 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54dt6\" (UniqueName: \"kubernetes.io/projected/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-kube-api-access-54dt6\") pod \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.529525 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-trusted-ca\") pod \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.529576 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-operator-metrics\") pod \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\" (UID: \"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.533722 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" (UID: "dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.539768 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" (UID: "dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.541749 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-kube-api-access-54dt6" (OuterVolumeSpecName: "kube-api-access-54dt6") pod "dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" (UID: "dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef"). InnerVolumeSpecName "kube-api-access-54dt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.630693 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-catalog-content\") pod \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.630749 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-catalog-content\") pod \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.630780 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vpzr\" (UniqueName: \"kubernetes.io/projected/df2d9461-97c9-4460-84bf-538f6cd8a5ae-kube-api-access-8vpzr\") pod \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.630827 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-utilities\") pod \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\" (UID: \"df2d9461-97c9-4460-84bf-538f6cd8a5ae\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.630855 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh6k6\" (UniqueName: \"kubernetes.io/projected/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-kube-api-access-lh6k6\") pod \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.630882 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-utilities\") pod \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\" (UID: \"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e\") " Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.631171 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.631188 4982 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.631200 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54dt6\" (UniqueName: \"kubernetes.io/projected/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef-kube-api-access-54dt6\") on node \"crc\" 
DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.632095 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-utilities" (OuterVolumeSpecName: "utilities") pod "b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" (UID: "b9b7ab81-833f-45c4-9e5a-7a1b2615de7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.633176 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-utilities" (OuterVolumeSpecName: "utilities") pod "df2d9461-97c9-4460-84bf-538f6cd8a5ae" (UID: "df2d9461-97c9-4460-84bf-538f6cd8a5ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.635663 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-kube-api-access-lh6k6" (OuterVolumeSpecName: "kube-api-access-lh6k6") pod "b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" (UID: "b9b7ab81-833f-45c4-9e5a-7a1b2615de7e"). InnerVolumeSpecName "kube-api-access-lh6k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.636030 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2d9461-97c9-4460-84bf-538f6cd8a5ae-kube-api-access-8vpzr" (OuterVolumeSpecName: "kube-api-access-8vpzr") pod "df2d9461-97c9-4460-84bf-538f6cd8a5ae" (UID: "df2d9461-97c9-4460-84bf-538f6cd8a5ae"). InnerVolumeSpecName "kube-api-access-8vpzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.663867 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df2d9461-97c9-4460-84bf-538f6cd8a5ae" (UID: "df2d9461-97c9-4460-84bf-538f6cd8a5ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.678888 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-58lhz"] Jan 23 07:59:16 crc kubenswrapper[4982]: W0123 07:59:16.688351 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd59f7ae_601a_424d_8800_d30e25869a77.slice/crio-f15810e7ab1076c9bb13f31d55e8bd1db73f41773941539a576454e83d31015c WatchSource:0}: Error finding container f15810e7ab1076c9bb13f31d55e8bd1db73f41773941539a576454e83d31015c: Status 404 returned error can't find the container with id f15810e7ab1076c9bb13f31d55e8bd1db73f41773941539a576454e83d31015c Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.738219 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.738291 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh6k6\" (UniqueName: \"kubernetes.io/projected/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-kube-api-access-lh6k6\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.738302 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.738310 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df2d9461-97c9-4460-84bf-538f6cd8a5ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.738322 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vpzr\" (UniqueName: \"kubernetes.io/projected/df2d9461-97c9-4460-84bf-538f6cd8a5ae-kube-api-access-8vpzr\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.764888 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" (UID: "b9b7ab81-833f-45c4-9e5a-7a1b2615de7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.839783 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.895075 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:59:16 crc kubenswrapper[4982]: I0123 07:59:16.992123 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.042657 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-catalog-content\") pod \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.042724 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w7kc\" (UniqueName: \"kubernetes.io/projected/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-kube-api-access-5w7kc\") pod \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.042788 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-utilities\") pod \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\" (UID: \"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc\") " Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.043911 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-utilities" (OuterVolumeSpecName: "utilities") pod "0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" (UID: "0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.049131 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-kube-api-access-5w7kc" (OuterVolumeSpecName: "kube-api-access-5w7kc") pod "0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" (UID: "0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc"). InnerVolumeSpecName "kube-api-access-5w7kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.099571 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" (UID: "0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.143963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-catalog-content\") pod \"5f5ba8a5-d454-48ef-936b-be7618f02695\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.144024 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf5k7\" (UniqueName: \"kubernetes.io/projected/5f5ba8a5-d454-48ef-936b-be7618f02695-kube-api-access-xf5k7\") pod \"5f5ba8a5-d454-48ef-936b-be7618f02695\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.144696 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-utilities\") pod \"5f5ba8a5-d454-48ef-936b-be7618f02695\" (UID: \"5f5ba8a5-d454-48ef-936b-be7618f02695\") " Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.145143 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.145166 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w7kc\" (UniqueName: \"kubernetes.io/projected/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-kube-api-access-5w7kc\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.145181 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.145464 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-utilities" (OuterVolumeSpecName: "utilities") pod "5f5ba8a5-d454-48ef-936b-be7618f02695" (UID: "5f5ba8a5-d454-48ef-936b-be7618f02695"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.147556 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5ba8a5-d454-48ef-936b-be7618f02695-kube-api-access-xf5k7" (OuterVolumeSpecName: "kube-api-access-xf5k7") pod "5f5ba8a5-d454-48ef-936b-be7618f02695" (UID: "5f5ba8a5-d454-48ef-936b-be7618f02695"). InnerVolumeSpecName "kube-api-access-xf5k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.190424 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f5ba8a5-d454-48ef-936b-be7618f02695" (UID: "5f5ba8a5-d454-48ef-936b-be7618f02695"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.245827 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf5k7\" (UniqueName: \"kubernetes.io/projected/5f5ba8a5-d454-48ef-936b-be7618f02695-kube-api-access-xf5k7\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.245863 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.245873 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5ba8a5-d454-48ef-936b-be7618f02695-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.258022 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7j6f" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.258057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7j6f" event={"ID":"5f5ba8a5-d454-48ef-936b-be7618f02695","Type":"ContainerDied","Data":"59a08d4c1f3963628b5f164c56577c3be8ad33cb03a9118390ce4577dea32c6c"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.258138 4982 scope.go:117] "RemoveContainer" containerID="c4881b02c0d38e4dd549f3804ba958cb520d297080d5fe309ee941309cb025ed" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.263834 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55vpx" event={"ID":"b9b7ab81-833f-45c4-9e5a-7a1b2615de7e","Type":"ContainerDied","Data":"b34b4e15aa9b66ab8a0a7ef63cfee6465dd117c6d44bb7a87b378f94d54f2c82"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.263910 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55vpx" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.267958 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-79l5k" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.267955 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-79l5k" event={"ID":"df2d9461-97c9-4460-84bf-538f6cd8a5ae","Type":"ContainerDied","Data":"16f2f4b3ae16445b16682dbf54cd05b5cae847a31a48ab865f14e1dfcf391cf6"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.270157 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.270886 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9rvcj" event={"ID":"dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef","Type":"ContainerDied","Data":"1389c1a34cc2cc63e75fa7b56684a468297614298df7b9c78a3c7830b7b042ab"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.273971 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" event={"ID":"dd59f7ae-601a-424d-8800-d30e25869a77","Type":"ContainerStarted","Data":"7e93d49117cf4c6adb90d3dd250495d374e3b662b2f4121f970c0414a702d962"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.274048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" event={"ID":"dd59f7ae-601a-424d-8800-d30e25869a77","Type":"ContainerStarted","Data":"f15810e7ab1076c9bb13f31d55e8bd1db73f41773941539a576454e83d31015c"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.274363 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.282099 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hljdw" event={"ID":"0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc","Type":"ContainerDied","Data":"52f02ccc7659a6d4010d0108797dad3d1950ee4ae5801dd7d7ea90bf740b7acd"} Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.282166 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hljdw" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.283672 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.284184 4982 scope.go:117] "RemoveContainer" containerID="032e5013b9a0053ea75fab6088f08c5974b4208abe52176d5c13c249645b8714" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.305325 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7j6f"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.309402 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7j6f"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.317827 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-58lhz" podStartSLOduration=2.317801918 podStartE2EDuration="2.317801918s" podCreationTimestamp="2026-01-23 07:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 07:59:17.314279253 +0000 UTC m=+231.794642639" watchObservedRunningTime="2026-01-23 07:59:17.317801918 +0000 UTC m=+231.798165304" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.331892 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55vpx"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.332611 4982 scope.go:117] "RemoveContainer" containerID="6549c9f06964e72a37ad46511a06bfc1da99e649ebb1c8ae86c3098b33daee59" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.340746 4982 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55vpx"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.368335 4982 scope.go:117] "RemoveContainer" containerID="fd06a92141e7e1a9b2f782938a2e0e1bb791c79eaf9bf6217223206b43a9ee4a" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.383986 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rvcj"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.392045 4982 scope.go:117] "RemoveContainer" containerID="2c0b6b2823361f78495cad93e4b1a0d16ddf6e0416aec38ca44278427e6236ad" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.392836 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9rvcj"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.411215 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hljdw"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.416650 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hljdw"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.417251 4982 scope.go:117] "RemoveContainer" containerID="8081a691945e158fc6ac3c10b660d937c3cd43aefaff7b8ae89327aa4da6f5fa" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.439325 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-79l5k"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.445363 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-79l5k"] Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.448043 4982 scope.go:117] "RemoveContainer" containerID="388e7a703050c17e98728877ed36dbe0172090ff779f5416ca9a29a9d3d3d18f" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.472912 4982 scope.go:117] "RemoveContainer" containerID="58b451ec9b93aab66e62b8436590fa0b579b52724024493f94564a76124693c6" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.487796 4982 scope.go:117] "RemoveContainer" containerID="1c3f1366e6c8352d9f8d9353536b3c0488e738ef5cf3ffa56cecedfea262c1f7" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.504032 4982 scope.go:117] "RemoveContainer" containerID="420d82d16393b96d366944fe37455fbf59201e2a61e974b72fbfc3a11c32dded" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.518190 4982 scope.go:117] "RemoveContainer" containerID="93319a044b667ca77e1812fe5dafb131eefdfa01bd19afa9caa35da26c442b49" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.536376 4982 scope.go:117] "RemoveContainer" containerID="20af36577fe950b992090d8dfa21a63742d75d007f577d8fd471d05006fe56c2" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.553672 4982 scope.go:117] "RemoveContainer" containerID="db50853570d0e3b672d28fba73787e94994caa9ba26a2956ba73ca0ffa00c4a3" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.844891 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" path="/var/lib/kubelet/pods/0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc/volumes" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.846465 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" path="/var/lib/kubelet/pods/5f5ba8a5-d454-48ef-936b-be7618f02695/volumes" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.847665 4982 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" path="/var/lib/kubelet/pods/b9b7ab81-833f-45c4-9e5a-7a1b2615de7e/volumes" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.848976 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" path="/var/lib/kubelet/pods/dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef/volumes" Jan 23 07:59:17 crc kubenswrapper[4982]: I0123 07:59:17.850847 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" path="/var/lib/kubelet/pods/df2d9461-97c9-4460-84bf-538f6cd8a5ae/volumes" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006157 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rpb7j"] Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006398 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006414 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006424 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006432 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006442 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006451 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006469 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006477 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006489 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006498 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006506 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006514 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006528 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" containerName="marketplace-operator" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006536 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" containerName="marketplace-operator" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006546 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006553 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006566 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006576 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006590 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006599 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006611 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006636 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="extract-content" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006649 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006658 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: E0123 07:59:18.006669 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006677 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="extract-utilities" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006788 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5ba8a5-d454-48ef-936b-be7618f02695" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006804 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1f7f7f-7dd6-4d33-93e1-c777edb2c4cc" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006817 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b7ab81-833f-45c4-9e5a-7a1b2615de7e" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006828 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde2e4f6-e31f-4ccb-9d70-0c011e6be5ef" containerName="marketplace-operator" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.006850 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2d9461-97c9-4460-84bf-538f6cd8a5ae" containerName="registry-server" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 
07:59:18.007748 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.012064 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.023785 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpb7j"] Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.065714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c7860e-1211-4fac-afcc-79f906305ebe-utilities\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.065794 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c7860e-1211-4fac-afcc-79f906305ebe-catalog-content\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.065892 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqvq\" (UniqueName: \"kubernetes.io/projected/b4c7860e-1211-4fac-afcc-79f906305ebe-kube-api-access-xkqvq\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.167193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqvq\" (UniqueName: \"kubernetes.io/projected/b4c7860e-1211-4fac-afcc-79f906305ebe-kube-api-access-xkqvq\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.167308 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c7860e-1211-4fac-afcc-79f906305ebe-utilities\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.167378 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c7860e-1211-4fac-afcc-79f906305ebe-catalog-content\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.168098 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c7860e-1211-4fac-afcc-79f906305ebe-catalog-content\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.168217 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b4c7860e-1211-4fac-afcc-79f906305ebe-utilities\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.196135 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqvq\" (UniqueName: \"kubernetes.io/projected/b4c7860e-1211-4fac-afcc-79f906305ebe-kube-api-access-xkqvq\") pod \"redhat-marketplace-rpb7j\" (UID: \"b4c7860e-1211-4fac-afcc-79f906305ebe\") " pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.207404 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5pqbt"] Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.212492 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.214580 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.215749 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pqbt"] Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.268078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-utilities\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.268137 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfzj4\" (UniqueName: \"kubernetes.io/projected/2678d34c-2a1d-43ae-bc45-7c7763a89190-kube-api-access-tfzj4\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.268360 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-catalog-content\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.332225 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.369412 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-catalog-content\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.369480 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-utilities\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.369517 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfzj4\" (UniqueName: \"kubernetes.io/projected/2678d34c-2a1d-43ae-bc45-7c7763a89190-kube-api-access-tfzj4\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.369952 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-catalog-content\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.370150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-utilities\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.397580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfzj4\" (UniqueName: \"kubernetes.io/projected/2678d34c-2a1d-43ae-bc45-7c7763a89190-kube-api-access-tfzj4\") pod \"certified-operators-5pqbt\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.552340 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.757086 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rpb7j"] Jan 23 07:59:18 crc kubenswrapper[4982]: I0123 07:59:18.961718 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pqbt"] Jan 23 07:59:19 crc kubenswrapper[4982]: I0123 07:59:19.300650 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4c7860e-1211-4fac-afcc-79f906305ebe" containerID="e6ff8a5a7f317d026a4125f1b69a960c5f2ecca0c9a11ffa1e7dd9f70191c959" exitCode=0 Jan 23 07:59:19 crc kubenswrapper[4982]: I0123 07:59:19.300899 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpb7j" event={"ID":"b4c7860e-1211-4fac-afcc-79f906305ebe","Type":"ContainerDied","Data":"e6ff8a5a7f317d026a4125f1b69a960c5f2ecca0c9a11ffa1e7dd9f70191c959"} Jan 23 07:59:19 crc kubenswrapper[4982]: I0123 07:59:19.301078 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpb7j" event={"ID":"b4c7860e-1211-4fac-afcc-79f906305ebe","Type":"ContainerStarted","Data":"e80a6621416c53d85fc0dd7e3ec777da7692b18fee058bab35d1701441239f08"} Jan 23 07:59:19 crc kubenswrapper[4982]: I0123 07:59:19.306706 4982 generic.go:334] "Generic (PLEG): container finished" podID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerID="b5fd32d3f33f9b304d865ee1dc5bd1638c725c970a6e81a328cc59fd01ba6cb1" exitCode=0 Jan 23 07:59:19 crc kubenswrapper[4982]: I0123 07:59:19.306753 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pqbt" event={"ID":"2678d34c-2a1d-43ae-bc45-7c7763a89190","Type":"ContainerDied","Data":"b5fd32d3f33f9b304d865ee1dc5bd1638c725c970a6e81a328cc59fd01ba6cb1"} Jan 23 07:59:19 crc kubenswrapper[4982]: I0123 07:59:19.306805 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pqbt" event={"ID":"2678d34c-2a1d-43ae-bc45-7c7763a89190","Type":"ContainerStarted","Data":"f44d437db05246404b081c988f8444dcd43e70645eaf44c0c7e9d3a8a9b50243"} Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.285539 4982 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.286744 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663" gracePeriod=15 Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.287069 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858" gracePeriod=15 Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.287149 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79" gracePeriod=15 Jan 23 07:59:20 crc 
kubenswrapper[4982]: I0123 07:59:20.287218 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a" gracePeriod=15 Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.287326 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd" gracePeriod=15 Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.287336 4982 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.287938 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.287982 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.288018 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288032 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.288053 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288066 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.288096 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288109 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.288126 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288138 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.288161 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288175 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.288193 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288205 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288383 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288421 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288444 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288467 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288488 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.288510 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.291488 4982 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.292671 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.299521 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.321470 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4c7860e-1211-4fac-afcc-79f906305ebe" containerID="db8113604da4099c59c0be0bc6735c6b4c6b7c712926ee93da0cbdb42a8d13d1" exitCode=0 Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.321848 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpb7j" event={"ID":"b4c7860e-1211-4fac-afcc-79f906305ebe","Type":"ContainerDied","Data":"db8113604da4099c59c0be0bc6735c6b4c6b7c712926ee93da0cbdb42a8d13d1"} Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.330219 4982 generic.go:334] "Generic (PLEG): container finished" podID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerID="7b160df9479a77e9bc75e31a842de3cb1df5b76c90e79c6b86de24c0445f669a" exitCode=0 Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.330332 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pqbt" event={"ID":"2678d34c-2a1d-43ae-bc45-7c7763a89190","Type":"ContainerDied","Data":"7b160df9479a77e9bc75e31a842de3cb1df5b76c90e79c6b86de24c0445f669a"} Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.358078 4982 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.168:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.410444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.410973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.411018 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.411065 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc 
kubenswrapper[4982]: I0123 07:59:20.411094 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.411135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.411188 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.411729 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.513816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.513876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.513901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.513920 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.513954 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc 
kubenswrapper[4982]: I0123 07:59:20.513970 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.513985 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514041 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514077 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514147 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514169 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.514271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.565018 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.565974 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.566565 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.567161 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.567679 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.567723 4982 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.568120 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="200ms" Jan 23 07:59:20 crc kubenswrapper[4982]: I0123 07:59:20.659480 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:20 crc kubenswrapper[4982]: W0123 07:59:20.682363 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-35fc2ef5b4053b0ab402ad3756821a8fbfd9e3df5c100c0943f9cdb0557d9006 WatchSource:0}: Error finding container 35fc2ef5b4053b0ab402ad3756821a8fbfd9e3df5c100c0943f9cdb0557d9006: Status 404 returned error can't find the container with id 35fc2ef5b4053b0ab402ad3756821a8fbfd9e3df5c100c0943f9cdb0557d9006 Jan 23 07:59:20 crc kubenswrapper[4982]: E0123 07:59:20.769178 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="400ms" Jan 23 07:59:21 crc kubenswrapper[4982]: E0123 07:59:21.170484 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="800ms" Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.340301 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pqbt" event={"ID":"2678d34c-2a1d-43ae-bc45-7c7763a89190","Type":"ContainerStarted","Data":"920ef0d8e4c64224ec06c7d8b192c386c6cedbae38822b844015951509a918ed"} Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.346887 4982 generic.go:334] "Generic (PLEG): container finished" podID="f6953a35-140d-4781-a533-4d6d61fd418d" containerID="7505c71f4479ceb1818aabb384b3724fb8a1e2cbd72f6ee0093497291f97bc28" exitCode=0 Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.346985 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6953a35-140d-4781-a533-4d6d61fd418d","Type":"ContainerDied","Data":"7505c71f4479ceb1818aabb384b3724fb8a1e2cbd72f6ee0093497291f97bc28"} Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.348969 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"30093051b502f056151b139efaf31f0e671b1839cf18d222c2d70b25f450a61d"} Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.349028 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"35fc2ef5b4053b0ab402ad3756821a8fbfd9e3df5c100c0943f9cdb0557d9006"} Jan 23 07:59:21 crc kubenswrapper[4982]: E0123 07:59:21.349768 4982 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.168:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.351665 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.353131 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.353902 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858" exitCode=0 Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.353937 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79" exitCode=0 Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.353946 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a" exitCode=0 Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.353958 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd" exitCode=2 Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.354042 4982 scope.go:117] "RemoveContainer" containerID="89b7d3a36f9128bce9fd75ca2c8ae9f7a677fba5cccd18b0d03a7377f6e3d6e6" Jan 23 07:59:21 crc kubenswrapper[4982]: I0123 07:59:21.356603 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rpb7j" event={"ID":"b4c7860e-1211-4fac-afcc-79f906305ebe","Type":"ContainerStarted","Data":"b103bed17340ad8860c42f141b41be0dec05adbb7598394312010c3fbb2eaa62"} Jan 23 07:59:21 crc kubenswrapper[4982]: E0123 07:59:21.971754 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="1.6s" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.366885 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.760768 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.843166 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-var-lock\") pod \"f6953a35-140d-4781-a533-4d6d61fd418d\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.843262 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6953a35-140d-4781-a533-4d6d61fd418d-kube-api-access\") pod \"f6953a35-140d-4781-a533-4d6d61fd418d\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.843386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-kubelet-dir\") pod \"f6953a35-140d-4781-a533-4d6d61fd418d\" (UID: \"f6953a35-140d-4781-a533-4d6d61fd418d\") " Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.843423 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-var-lock" (OuterVolumeSpecName: "var-lock") pod "f6953a35-140d-4781-a533-4d6d61fd418d" (UID: "f6953a35-140d-4781-a533-4d6d61fd418d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.843659 4982 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.844017 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f6953a35-140d-4781-a533-4d6d61fd418d" (UID: "f6953a35-140d-4781-a533-4d6d61fd418d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.853799 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6953a35-140d-4781-a533-4d6d61fd418d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f6953a35-140d-4781-a533-4d6d61fd418d" (UID: "f6953a35-140d-4781-a533-4d6d61fd418d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.945012 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6953a35-140d-4781-a533-4d6d61fd418d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:22 crc kubenswrapper[4982]: I0123 07:59:22.945047 4982 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6953a35-140d-4781-a533-4d6d61fd418d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.144349 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.144973 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248465 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248526 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248543 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248573 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248607 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248645 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248957 4982 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248970 4982 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.248978 4982 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.384443 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.384447 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f6953a35-140d-4781-a533-4d6d61fd418d","Type":"ContainerDied","Data":"f339ff09a9b6295247e2149a245d72f4b9b0c5e4d845ac7562699806d8bce47f"} Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.384691 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f339ff09a9b6295247e2149a245d72f4b9b0c5e4d845ac7562699806d8bce47f" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.388835 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.389930 4982 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663" exitCode=0 Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.390006 4982 scope.go:117] "RemoveContainer" containerID="f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.390040 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.431339 4982 scope.go:117] "RemoveContainer" containerID="617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.452199 4982 scope.go:117] "RemoveContainer" containerID="bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.469990 4982 scope.go:117] "RemoveContainer" containerID="e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.484595 4982 scope.go:117] "RemoveContainer" containerID="ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.500338 4982 scope.go:117] "RemoveContainer" containerID="2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.522154 4982 scope.go:117] "RemoveContainer" containerID="f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.522767 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\": container with ID starting with f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858 not found: ID does not exist" containerID="f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.522876 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858"} err="failed to get container status \"f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\": rpc error: code = NotFound desc = could not find container \"f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858\": container with ID starting with f584de84ad46292a271bdf2f11af44c6034d072c225755546e4a51aaf463c858 not found: ID does not exist" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.522924 4982 scope.go:117] "RemoveContainer" containerID="617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.523753 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\": container with ID starting with 617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79 not found: ID does not exist" containerID="617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.523789 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79"} err="failed to get container status \"617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\": rpc error: code = NotFound desc = could not find container \"617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79\": container with ID starting with 617601f3a15bb011e5683afdce47314be1c804adaa623d26c0c398b914c6ac79 not found: ID does not exist" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.523816 4982 scope.go:117] "RemoveContainer" 
containerID="bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.524564 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\": container with ID starting with bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a not found: ID does not exist" containerID="bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.524592 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a"} err="failed to get container status \"bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\": rpc error: code = NotFound desc = could not find container \"bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a\": container with ID starting with bdc4131a75048ea34bb003aadeacde21f0e585388e47d0dcc1c54b4c5349e86a not found: ID does not exist" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.524608 4982 scope.go:117] "RemoveContainer" containerID="e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.524918 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\": container with ID starting with e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd not found: ID does not exist" containerID="e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.524980 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd"} err="failed to get container status \"e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\": rpc error: code = NotFound desc = could not find container \"e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd\": container with ID starting with e7c6b9b92e7820405efc7957ff1ccb2ed70ce38fcbfb163ba78831f6009afdfd not found: ID does not exist" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.525038 4982 scope.go:117] "RemoveContainer" containerID="ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.525426 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\": container with ID starting with ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663 not found: ID does not exist" containerID="ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.525465 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663"} err="failed to get container status \"ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\": rpc error: code = NotFound desc = could not find container \"ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663\": container with ID starting with 
ee3c9e406e3f5892ab4c9b7d08fad1147496999d5d91c6195a79904f9e1a3663 not found: ID does not exist" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.525487 4982 scope.go:117] "RemoveContainer" containerID="2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.526230 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\": container with ID starting with 2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109 not found: ID does not exist" containerID="2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.526279 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109"} err="failed to get container status \"2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\": rpc error: code = NotFound desc = could not find container \"2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109\": container with ID starting with 2181519c7470cb6f52c6d0d0664f04f54861b8fd3f07fd255d3d18efbbbed109 not found: ID does not exist" Jan 23 07:59:23 crc kubenswrapper[4982]: E0123 07:59:23.572674 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="3.2s" Jan 23 07:59:23 crc kubenswrapper[4982]: I0123 07:59:23.837203 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.326087 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.327211 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.327874 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.328398 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: E0123 07:59:25.333067 4982 
event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.168:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-rpb7j.188d4d3eb3b14537 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-rpb7j,UID:b4c7860e-1211-4fac-afcc-79f906305ebe,APIVersion:v1,ResourceVersion:29958,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 07:59:20.324523319 +0000 UTC m=+234.804886745,LastTimestamp:2026-01-23 07:59:20.324523319 +0000 UTC m=+234.804886745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.831756 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.832014 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: I0123 07:59:25.832249 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:25 crc kubenswrapper[4982]: E0123 07:59:25.911119 4982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.168:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-rpb7j.188d4d3eb3b14537 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-rpb7j,UID:b4c7860e-1211-4fac-afcc-79f906305ebe,APIVersion:v1,ResourceVersion:29958,FieldPath:spec.containers{registry-server},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 07:59:20.324523319 +0000 UTC m=+234.804886745,LastTimestamp:2026-01-23 07:59:20.324523319 +0000 UTC m=+234.804886745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 07:59:26 crc kubenswrapper[4982]: E0123 07:59:26.773878 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="6.4s" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.332667 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.333099 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.382300 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.383709 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.384082 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.384584 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.464530 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rpb7j" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.465538 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.465809 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.466138 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.553232 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.553288 
4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.603764 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.604746 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.605438 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: I0123 07:59:28.606064 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:28 crc kubenswrapper[4982]: E0123 07:59:28.921096 4982 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.168:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" volumeName="registry-storage" Jan 23 07:59:29 crc kubenswrapper[4982]: I0123 07:59:29.506920 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 07:59:29 crc kubenswrapper[4982]: I0123 07:59:29.507882 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:29 crc kubenswrapper[4982]: I0123 07:59:29.508962 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:29 crc kubenswrapper[4982]: I0123 07:59:29.509418 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.827173 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.828539 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.829208 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.829759 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.856024 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.856067 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:32 crc kubenswrapper[4982]: E0123 07:59:32.856716 4982 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:32 crc kubenswrapper[4982]: I0123 07:59:32.857597 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:32 crc kubenswrapper[4982]: W0123 07:59:32.889018 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-432e2418dc9872fcf12a28d05cc2621c355599cd47950061a4e8cb319ab6e1e0 WatchSource:0}: Error finding container 432e2418dc9872fcf12a28d05cc2621c355599cd47950061a4e8cb319ab6e1e0: Status 404 returned error can't find the container with id 432e2418dc9872fcf12a28d05cc2621c355599cd47950061a4e8cb319ab6e1e0 Jan 23 07:59:33 crc kubenswrapper[4982]: E0123 07:59:33.175537 4982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.168:6443: connect: connection refused" interval="7s" Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.464597 4982 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="fc84b4177b92758443a0b5d69c7693328a4ad849d092e1dc99647d2afb1c5cd1" exitCode=0 Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.464692 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"fc84b4177b92758443a0b5d69c7693328a4ad849d092e1dc99647d2afb1c5cd1"} Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.464807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"432e2418dc9872fcf12a28d05cc2621c355599cd47950061a4e8cb319ab6e1e0"} Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.465341 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.465393 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.465913 4982 status_manager.go:851] "Failed to get status for pod" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" pod="openshift-marketplace/certified-operators-5pqbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-5pqbt\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:33 crc kubenswrapper[4982]: E0123 07:59:33.466107 4982 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.466149 4982 status_manager.go:851] "Failed to get status for pod" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:33 crc kubenswrapper[4982]: I0123 07:59:33.466316 4982 status_manager.go:851] "Failed to get status for pod" podUID="b4c7860e-1211-4fac-afcc-79f906305ebe" 
pod="openshift-marketplace/redhat-marketplace-rpb7j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-rpb7j\": dial tcp 38.129.56.168:6443: connect: connection refused" Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.476750 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.477692 4982 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1" exitCode=1 Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.477799 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1"} Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.478951 4982 scope.go:117] "RemoveContainer" containerID="049197a85c160ef685d0698f52f3086b67070bcebf7cf6f954e5dc7e85a61dc1" Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.483817 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"297d2c2aee160e582ba882e805e257ce899fa44efc1f9e7bddba5564bf1554ec"} Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.483855 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2ed7dd96122d64bf68f126c53e252abe1153e16afe256f3cdc48f2971ca6397b"} Jan 23 07:59:34 crc kubenswrapper[4982]: I0123 07:59:34.483865 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5d73d762f98c96006c4f5286b040ea110ecbf465bdb86389884f721c09830c22"} Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.365737 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.495021 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.495091 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c76ea3d1c1c7f1a028e14c1c793c716b6946b77f7ab1bfcab09a00b15ff87456"} Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.500218 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6ba8259e2a18e3dd7bc218a94ad6b0dd4d7d316815819a4682355803e65b2efb"} Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.500264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"014a2dc8ca6b61b6e270e5dc23c4048d613f2558403f8e7ec54486d3aa64791d"} Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.500468 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.500617 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:35 crc kubenswrapper[4982]: I0123 07:59:35.500668 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:36 crc kubenswrapper[4982]: I0123 07:59:36.034248 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:59:37 crc kubenswrapper[4982]: I0123 07:59:37.858275 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:37 crc kubenswrapper[4982]: I0123 07:59:37.858752 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:37 crc kubenswrapper[4982]: I0123 07:59:37.865048 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:40 crc kubenswrapper[4982]: I0123 07:59:40.509790 4982 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:40 crc kubenswrapper[4982]: I0123 07:59:40.528914 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:40 crc kubenswrapper[4982]: I0123 07:59:40.528954 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:40 crc kubenswrapper[4982]: I0123 07:59:40.533882 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 07:59:40 crc kubenswrapper[4982]: I0123 07:59:40.536243 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="048297f2-f688-4a30-9177-37b4a09f4e32" Jan 23 07:59:41 crc kubenswrapper[4982]: I0123 07:59:41.538031 4982 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:41 crc kubenswrapper[4982]: I0123 07:59:41.538546 4982 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d5501db4-cca7-46c4-a520-d9697236b48b" Jan 23 07:59:43 crc kubenswrapper[4982]: I0123 07:59:43.157991 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:59:43 crc kubenswrapper[4982]: I0123 07:59:43.161427 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:59:45 crc kubenswrapper[4982]: I0123 07:59:45.842837 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="048297f2-f688-4a30-9177-37b4a09f4e32" Jan 23 07:59:46 crc kubenswrapper[4982]: I0123 07:59:46.041797 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 07:59:51 crc kubenswrapper[4982]: I0123 07:59:51.957411 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 07:59:51 crc kubenswrapper[4982]: I0123 07:59:51.980509 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.010236 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.120898 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.127604 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.138777 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.203683 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.546778 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.546974 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.717195 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.741457 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.871014 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 07:59:52 crc kubenswrapper[4982]: I0123 07:59:52.890067 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.011961 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.023328 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.394381 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.447570 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.496572 4982 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.737419 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.749870 4982 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.769913 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.848276 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 07:59:53 crc kubenswrapper[4982]: I0123 07:59:53.979077 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.064885 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.168965 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.382886 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.484101 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.485487 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.500479 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.578189 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.593777 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.767170 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 07:59:54 crc kubenswrapper[4982]: I0123 07:59:54.870871 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.023447 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.166029 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.302707 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.378973 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" 
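
The burst of reflector.go:368 "Caches populated" entries running from 07:59:51 onward marks the kubelet's informers re-listing Secrets and ConfigMaps namespace by namespace once the kube-apiserver at api-int.crc.testing:6443 became reachable again (its readiness probe reported "ready" at 07:59:40 above). A quick way to digest the flood is to tally events per namespace and resource kind from a saved copy of this journal. The sketch below is illustrative only: the file name kubelet.log and the helper itself are assumptions, not part of the capture, and the regex is derived from the entry format shown in these lines (it deliberately skips non-namespaced entries such as the *v1.RuntimeClass and *v1.Service ones sourced from client-go's informer factory).

#!/usr/bin/env python3
# Minimal sketch (hypothetical helper, not part of this capture): tally
# "Caches populated" reflector events per namespace and kind from a saved
# kubelet journal. Assumes the journal was exported to kubelet.log; entries
# are expected to look like the ones above, e.g.:
#   ... reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-config"
import re
from collections import Counter

# Matches object-"<namespace>"/"<name>" as it appears in the reflector
# entries; entries without an object- source (e.g. *v1.RuntimeClass from
# the informer factory) are intentionally not matched.
PATTERN = re.compile(
    r'reflector\.go:\d+\] Caches populated for (\*v1\.\w+) '
    r'from object-"([^"]+)"/"([^"]+)"'
)

def tally(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            # finditer rather than search, so the helper also copes with
            # dumps where several journal entries share one physical line.
            for m in PATTERN.finditer(line):
                kind, namespace, _name = m.groups()
                counts[namespace, kind] += 1
    return counts

if __name__ == "__main__":
    # Print the 20 busiest (namespace, kind) pairs, busiest first.
    for (namespace, kind), n in tally("kubelet.log").most_common(20):
        print(f"{n:4d}  {namespace:45s}  {kind}")

Run against a capture like this one, the tally simply shows which namespaces' Secrets and ConfigMaps the kubelet is watching on behalf of the pods it hosts, which is a compact picture of what repopulated after the API server outage.
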
Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.379116 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.409740 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.542106 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.552983 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.644540 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.721041 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.790059 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 07:59:55 crc kubenswrapper[4982]: I0123 07:59:55.819871 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.000462 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.080093 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.101431 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.170151 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.178560 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.205002 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.267767 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.397664 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.424963 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.502396 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.511081 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.515512 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.565272 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.583214 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.647969 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.697032 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.726598 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.765451 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.803139 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.886686 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.888164 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 07:59:56 crc kubenswrapper[4982]: I0123 07:59:56.968221 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.013949 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.181229 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.190568 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.238959 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.252110 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.324738 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.326901 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.385010 4982 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.406332 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.433817 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.438549 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.477590 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.511661 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.536925 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.540138 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.632727 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.696899 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.732723 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.758304 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.765381 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.767186 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.773832 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.825402 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.851504 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.857589 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 07:59:57 crc kubenswrapper[4982]: I0123 07:59:57.858852 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 07:59:57 
crc kubenswrapper[4982]: I0123 07:59:57.984096 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.083222 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.112188 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.368583 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.488871 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.537395 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.681838 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.685666 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.775603 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.861087 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.892402 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.896299 4982 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.899491 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.925976 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.934518 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 07:59:58 crc kubenswrapper[4982]: I0123 07:59:58.993563 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.014592 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.113463 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.165848 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 07:59:59 crc 
kubenswrapper[4982]: I0123 07:59:59.175013 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.192213 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.218162 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.260865 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.293412 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.328324 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.403077 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.415878 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.424374 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.444924 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.670478 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.763886 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.891570 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 07:59:59 crc kubenswrapper[4982]: I0123 07:59:59.913489 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.118309 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.130406 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.185752 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.189562 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.351197 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.354461 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.433291 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.439403 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.452135 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.471647 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.618672 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.619233 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.676220 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.688528 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.705982 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.780747 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.807967 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.818561 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.820130 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.868544 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.905982 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.961569 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 08:00:00 crc kubenswrapper[4982]: I0123 08:00:00.994404 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.017179 4982 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.026475 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.030835 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.040947 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.129958 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.157450 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.180464 4982 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.226818 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.255096 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.309617 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.328394 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.489487 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.529490 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.691197 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.796408 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.878770 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.879817 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.890862 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 08:00:01 crc kubenswrapper[4982]: I0123 08:00:01.926732 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.042952 4982 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.133058 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.159028 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.311471 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.335704 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.377021 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.453009 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.454970 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.506646 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.542141 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.552837 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.630390 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.631764 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.693063 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.704316 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.728497 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.752757 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.803432 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.816271 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.881132 4982 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.882538 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 08:00:02 crc kubenswrapper[4982]: I0123 08:00:02.951531 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.053772 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.071286 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.187543 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.193953 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.427072 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.449224 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.455900 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.464245 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.537776 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.545694 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.552814 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.561710 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.690151 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.770004 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.828561 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.837492 4982 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.875885 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 
08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.948125 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.976911 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 08:00:03 crc kubenswrapper[4982]: I0123 08:00:03.991057 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.367718 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.414886 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.422385 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.467097 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.509698 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.658059 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.665493 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.736075 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.744562 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.765006 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.895852 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 08:00:04 crc kubenswrapper[4982]: I0123 08:00:04.915414 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.004845 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.066219 4982 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.106130 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.378101 4982 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"env-overrides" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.548260 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.710065 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.816335 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 08:00:05 crc kubenswrapper[4982]: I0123 08:00:05.972607 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.224287 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.340396 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.408802 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.451571 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.522125 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.594128 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.623604 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.859399 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 08:00:06 crc kubenswrapper[4982]: I0123 08:00:06.890007 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 08:00:07 crc kubenswrapper[4982]: I0123 08:00:07.024537 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 08:00:07 crc kubenswrapper[4982]: I0123 08:00:07.235893 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 08:00:07 crc kubenswrapper[4982]: I0123 08:00:07.489112 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 08:00:07 crc kubenswrapper[4982]: I0123 08:00:07.613045 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 08:00:07 crc kubenswrapper[4982]: I0123 08:00:07.742680 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 08:00:07 crc kubenswrapper[4982]: I0123 
08:00:07.980531 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.087395 4982 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.090664 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rpb7j" podStartSLOduration=49.367038019 podStartE2EDuration="51.090649308s" podCreationTimestamp="2026-01-23 07:59:17 +0000 UTC" firstStartedPulling="2026-01-23 07:59:19.304089991 +0000 UTC m=+233.784453397" lastFinishedPulling="2026-01-23 07:59:21.02770127 +0000 UTC m=+235.508064686" observedRunningTime="2026-01-23 07:59:40.376180935 +0000 UTC m=+254.856544391" watchObservedRunningTime="2026-01-23 08:00:08.090649308 +0000 UTC m=+282.571012694" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.091042 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5pqbt" podStartSLOduration=48.609397067 podStartE2EDuration="50.091036348s" podCreationTimestamp="2026-01-23 07:59:18 +0000 UTC" firstStartedPulling="2026-01-23 07:59:19.31070994 +0000 UTC m=+233.791073326" lastFinishedPulling="2026-01-23 07:59:20.792349211 +0000 UTC m=+235.272712607" observedRunningTime="2026-01-23 07:59:40.286712395 +0000 UTC m=+254.767075841" watchObservedRunningTime="2026-01-23 08:00:08.091036348 +0000 UTC m=+282.571399734" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.093108 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.093165 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h"] Jan 23 08:00:08 crc kubenswrapper[4982]: E0123 08:00:08.093483 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" containerName="installer" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.093774 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" containerName="installer" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.094367 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6953a35-140d-4781-a533-4d6d61fd418d" containerName="installer" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.095285 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.099459 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.099918 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.101094 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.128974 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=28.128903301 podStartE2EDuration="28.128903301s" podCreationTimestamp="2026-01-23 07:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:00:08.123178718 +0000 UTC m=+282.603542144" watchObservedRunningTime="2026-01-23 08:00:08.128903301 +0000 UTC m=+282.609266757" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.189369 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.308943 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c459257-75e2-4837-acf8-9ada114fe1bf-config-volume\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.309133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c459257-75e2-4837-acf8-9ada114fe1bf-secret-volume\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.309233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzkxt\" (UniqueName: \"kubernetes.io/projected/1c459257-75e2-4837-acf8-9ada114fe1bf-kube-api-access-mzkxt\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.410212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c459257-75e2-4837-acf8-9ada114fe1bf-config-volume\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.410766 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c459257-75e2-4837-acf8-9ada114fe1bf-secret-volume\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.410825 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzkxt\" (UniqueName: \"kubernetes.io/projected/1c459257-75e2-4837-acf8-9ada114fe1bf-kube-api-access-mzkxt\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.411851 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c459257-75e2-4837-acf8-9ada114fe1bf-config-volume\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.421476 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c459257-75e2-4837-acf8-9ada114fe1bf-secret-volume\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.439391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzkxt\" (UniqueName: \"kubernetes.io/projected/1c459257-75e2-4837-acf8-9ada114fe1bf-kube-api-access-mzkxt\") pod \"collect-profiles-29485920-9ld8h\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.545550 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.716852 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 08:00:08 crc kubenswrapper[4982]: I0123 08:00:08.729580 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:09 crc kubenswrapper[4982]: I0123 08:00:09.028883 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h"] Jan 23 08:00:09 crc kubenswrapper[4982]: I0123 08:00:09.481000 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 08:00:09 crc kubenswrapper[4982]: I0123 08:00:09.741348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" event={"ID":"1c459257-75e2-4837-acf8-9ada114fe1bf","Type":"ContainerStarted","Data":"d459e2592f34f0c281a743aa91989e882b15f29d51c4caf1f2f6ea94be8bfea5"} Jan 23 08:00:10 crc kubenswrapper[4982]: I0123 08:00:10.752474 4982 generic.go:334] "Generic (PLEG): container finished" podID="1c459257-75e2-4837-acf8-9ada114fe1bf" containerID="56561e505019d39265e49374c4be788e197504a3e7bf9dcb69689fa9b77b44b6" exitCode=0 Jan 23 08:00:10 crc kubenswrapper[4982]: I0123 08:00:10.752557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" event={"ID":"1c459257-75e2-4837-acf8-9ada114fe1bf","Type":"ContainerDied","Data":"56561e505019d39265e49374c4be788e197504a3e7bf9dcb69689fa9b77b44b6"} Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.130319 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.264888 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c459257-75e2-4837-acf8-9ada114fe1bf-config-volume\") pod \"1c459257-75e2-4837-acf8-9ada114fe1bf\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.264973 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c459257-75e2-4837-acf8-9ada114fe1bf-secret-volume\") pod \"1c459257-75e2-4837-acf8-9ada114fe1bf\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.265079 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzkxt\" (UniqueName: \"kubernetes.io/projected/1c459257-75e2-4837-acf8-9ada114fe1bf-kube-api-access-mzkxt\") pod \"1c459257-75e2-4837-acf8-9ada114fe1bf\" (UID: \"1c459257-75e2-4837-acf8-9ada114fe1bf\") " Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.265857 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c459257-75e2-4837-acf8-9ada114fe1bf-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c459257-75e2-4837-acf8-9ada114fe1bf" (UID: "1c459257-75e2-4837-acf8-9ada114fe1bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.270182 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c459257-75e2-4837-acf8-9ada114fe1bf-kube-api-access-mzkxt" (OuterVolumeSpecName: "kube-api-access-mzkxt") pod "1c459257-75e2-4837-acf8-9ada114fe1bf" (UID: "1c459257-75e2-4837-acf8-9ada114fe1bf"). 
InnerVolumeSpecName "kube-api-access-mzkxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.270243 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c459257-75e2-4837-acf8-9ada114fe1bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c459257-75e2-4837-acf8-9ada114fe1bf" (UID: "1c459257-75e2-4837-acf8-9ada114fe1bf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.366669 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c459257-75e2-4837-acf8-9ada114fe1bf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.366731 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzkxt\" (UniqueName: \"kubernetes.io/projected/1c459257-75e2-4837-acf8-9ada114fe1bf-kube-api-access-mzkxt\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.366752 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c459257-75e2-4837-acf8-9ada114fe1bf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.767993 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" event={"ID":"1c459257-75e2-4837-acf8-9ada114fe1bf","Type":"ContainerDied","Data":"d459e2592f34f0c281a743aa91989e882b15f29d51c4caf1f2f6ea94be8bfea5"} Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.768047 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d459e2592f34f0c281a743aa91989e882b15f29d51c4caf1f2f6ea94be8bfea5" Jan 23 08:00:12 crc kubenswrapper[4982]: I0123 08:00:12.768056 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h" Jan 23 08:00:14 crc kubenswrapper[4982]: I0123 08:00:14.281045 4982 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 08:00:14 crc kubenswrapper[4982]: I0123 08:00:14.282444 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://30093051b502f056151b139efaf31f0e671b1839cf18d222c2d70b25f450a61d" gracePeriod=5 Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.837553 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.838340 4982 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="30093051b502f056151b139efaf31f0e671b1839cf18d222c2d70b25f450a61d" exitCode=137 Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.838406 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35fc2ef5b4053b0ab402ad3756821a8fbfd9e3df5c100c0943f9cdb0557d9006" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.858234 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.858322 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983392 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983604 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983713 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983886 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 
08:00:19.983800 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983931 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.983963 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.984117 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.984463 4982 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.984494 4982 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.984514 4982 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:19 crc kubenswrapper[4982]: I0123 08:00:19.984531 4982 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:20 crc kubenswrapper[4982]: I0123 08:00:19.999997 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:00:20 crc kubenswrapper[4982]: I0123 08:00:20.086805 4982 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:00:20 crc kubenswrapper[4982]: I0123 08:00:20.845930 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:00:21 crc kubenswrapper[4982]: I0123 08:00:21.841156 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.597246 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rx29v"] Jan 23 08:00:23 crc kubenswrapper[4982]: E0123 08:00:23.597686 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.597705 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 08:00:23 crc kubenswrapper[4982]: E0123 08:00:23.597725 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c459257-75e2-4837-acf8-9ada114fe1bf" containerName="collect-profiles" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.597732 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c459257-75e2-4837-acf8-9ada114fe1bf" containerName="collect-profiles" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.597880 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.597902 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c459257-75e2-4837-acf8-9ada114fe1bf" containerName="collect-profiles" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.602964 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.607838 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.616412 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rx29v"] Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.746002 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44pwf\" (UniqueName: \"kubernetes.io/projected/77c4b8c7-5274-49c3-baff-769e28df27ba-kube-api-access-44pwf\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.746070 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c4b8c7-5274-49c3-baff-769e28df27ba-catalog-content\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.746099 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c4b8c7-5274-49c3-baff-769e28df27ba-utilities\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.847520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44pwf\" (UniqueName: \"kubernetes.io/projected/77c4b8c7-5274-49c3-baff-769e28df27ba-kube-api-access-44pwf\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.847885 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c4b8c7-5274-49c3-baff-769e28df27ba-catalog-content\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.847922 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c4b8c7-5274-49c3-baff-769e28df27ba-utilities\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.848584 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77c4b8c7-5274-49c3-baff-769e28df27ba-catalog-content\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.849390 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77c4b8c7-5274-49c3-baff-769e28df27ba-utilities\") pod \"community-operators-rx29v\" (UID: 
\"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.877936 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44pwf\" (UniqueName: \"kubernetes.io/projected/77c4b8c7-5274-49c3-baff-769e28df27ba-kube-api-access-44pwf\") pod \"community-operators-rx29v\" (UID: \"77c4b8c7-5274-49c3-baff-769e28df27ba\") " pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:23 crc kubenswrapper[4982]: I0123 08:00:23.935444 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:24 crc kubenswrapper[4982]: I0123 08:00:24.255757 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rx29v"] Jan 23 08:00:24 crc kubenswrapper[4982]: I0123 08:00:24.881863 4982 generic.go:334] "Generic (PLEG): container finished" podID="77c4b8c7-5274-49c3-baff-769e28df27ba" containerID="b8d342154d4185d00a809649f391e29c6a7c3c0b984af5a3b732539d39f21702" exitCode=0 Jan 23 08:00:24 crc kubenswrapper[4982]: I0123 08:00:24.881953 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx29v" event={"ID":"77c4b8c7-5274-49c3-baff-769e28df27ba","Type":"ContainerDied","Data":"b8d342154d4185d00a809649f391e29c6a7c3c0b984af5a3b732539d39f21702"} Jan 23 08:00:24 crc kubenswrapper[4982]: I0123 08:00:24.882008 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx29v" event={"ID":"77c4b8c7-5274-49c3-baff-769e28df27ba","Type":"ContainerStarted","Data":"3be430f968f01389f5b8e4faac1991953a63f171f31a50fcd381f2f5446867cf"} Jan 23 08:00:25 crc kubenswrapper[4982]: I0123 08:00:25.676121 4982 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 08:00:25 crc kubenswrapper[4982]: I0123 08:00:25.999415 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8j4j"] Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.001459 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.005444 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.010619 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8j4j"] Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.183946 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4942c45d-21ee-4378-ae42-3b5379bc5dc5-catalog-content\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.184036 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4942c45d-21ee-4378-ae42-3b5379bc5dc5-utilities\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.184089 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjfb\" (UniqueName: \"kubernetes.io/projected/4942c45d-21ee-4378-ae42-3b5379bc5dc5-kube-api-access-csjfb\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.286489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4942c45d-21ee-4378-ae42-3b5379bc5dc5-catalog-content\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.286590 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4942c45d-21ee-4378-ae42-3b5379bc5dc5-utilities\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.286696 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjfb\" (UniqueName: \"kubernetes.io/projected/4942c45d-21ee-4378-ae42-3b5379bc5dc5-kube-api-access-csjfb\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.287176 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4942c45d-21ee-4378-ae42-3b5379bc5dc5-catalog-content\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.287318 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4942c45d-21ee-4378-ae42-3b5379bc5dc5-utilities\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " 
pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.313157 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjfb\" (UniqueName: \"kubernetes.io/projected/4942c45d-21ee-4378-ae42-3b5379bc5dc5-kube-api-access-csjfb\") pod \"redhat-operators-z8j4j\" (UID: \"4942c45d-21ee-4378-ae42-3b5379bc5dc5\") " pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.338887 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.815593 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8j4j"] Jan 23 08:00:26 crc kubenswrapper[4982]: W0123 08:00:26.828768 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4942c45d_21ee_4378_ae42_3b5379bc5dc5.slice/crio-c4fb255217a1d5ef3f34a2b8a2ae23bda567b2bc878d0d066bae2fbcb3eb41bd WatchSource:0}: Error finding container c4fb255217a1d5ef3f34a2b8a2ae23bda567b2bc878d0d066bae2fbcb3eb41bd: Status 404 returned error can't find the container with id c4fb255217a1d5ef3f34a2b8a2ae23bda567b2bc878d0d066bae2fbcb3eb41bd Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.896818 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8j4j" event={"ID":"4942c45d-21ee-4378-ae42-3b5379bc5dc5","Type":"ContainerStarted","Data":"c4fb255217a1d5ef3f34a2b8a2ae23bda567b2bc878d0d066bae2fbcb3eb41bd"} Jan 23 08:00:26 crc kubenswrapper[4982]: I0123 08:00:26.904003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx29v" event={"ID":"77c4b8c7-5274-49c3-baff-769e28df27ba","Type":"ContainerStarted","Data":"acce83e417c2418f2cdfb4cb593f652f3d39ccd54ab9ebee18b686011b4430d0"} Jan 23 08:00:27 crc kubenswrapper[4982]: I0123 08:00:27.913692 4982 generic.go:334] "Generic (PLEG): container finished" podID="4942c45d-21ee-4378-ae42-3b5379bc5dc5" containerID="ab917cea7959dfa87c14183e2a8e7e5f3df121dc1072d3f8ca747503924985b7" exitCode=0 Jan 23 08:00:27 crc kubenswrapper[4982]: I0123 08:00:27.913898 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8j4j" event={"ID":"4942c45d-21ee-4378-ae42-3b5379bc5dc5","Type":"ContainerDied","Data":"ab917cea7959dfa87c14183e2a8e7e5f3df121dc1072d3f8ca747503924985b7"} Jan 23 08:00:27 crc kubenswrapper[4982]: I0123 08:00:27.917740 4982 generic.go:334] "Generic (PLEG): container finished" podID="77c4b8c7-5274-49c3-baff-769e28df27ba" containerID="acce83e417c2418f2cdfb4cb593f652f3d39ccd54ab9ebee18b686011b4430d0" exitCode=0 Jan 23 08:00:27 crc kubenswrapper[4982]: I0123 08:00:27.917787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx29v" event={"ID":"77c4b8c7-5274-49c3-baff-769e28df27ba","Type":"ContainerDied","Data":"acce83e417c2418f2cdfb4cb593f652f3d39ccd54ab9ebee18b686011b4430d0"} Jan 23 08:00:27 crc kubenswrapper[4982]: I0123 08:00:27.917816 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rx29v" event={"ID":"77c4b8c7-5274-49c3-baff-769e28df27ba","Type":"ContainerStarted","Data":"deb43bde6639d201d1c4ae5e0c2f234740ad6ad13f1283f723c13ebeface55cc"} Jan 23 08:00:27 crc kubenswrapper[4982]: I0123 08:00:27.969430 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rx29v" podStartSLOduration=2.2724325739999998 podStartE2EDuration="4.969410088s" podCreationTimestamp="2026-01-23 08:00:23 +0000 UTC" firstStartedPulling="2026-01-23 08:00:24.883973148 +0000 UTC m=+299.364336534" lastFinishedPulling="2026-01-23 08:00:27.580950662 +0000 UTC m=+302.061314048" observedRunningTime="2026-01-23 08:00:27.967298912 +0000 UTC m=+302.447662298" watchObservedRunningTime="2026-01-23 08:00:27.969410088 +0000 UTC m=+302.449773474" Jan 23 08:00:28 crc kubenswrapper[4982]: I0123 08:00:28.929313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8j4j" event={"ID":"4942c45d-21ee-4378-ae42-3b5379bc5dc5","Type":"ContainerStarted","Data":"f0bcd2152b0a8420696bce26e7acacab365137fec3ff0e1a5e518a0f245bfd6c"} Jan 23 08:00:29 crc kubenswrapper[4982]: I0123 08:00:29.939117 4982 generic.go:334] "Generic (PLEG): container finished" podID="4942c45d-21ee-4378-ae42-3b5379bc5dc5" containerID="f0bcd2152b0a8420696bce26e7acacab365137fec3ff0e1a5e518a0f245bfd6c" exitCode=0 Jan 23 08:00:29 crc kubenswrapper[4982]: I0123 08:00:29.939248 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8j4j" event={"ID":"4942c45d-21ee-4378-ae42-3b5379bc5dc5","Type":"ContainerDied","Data":"f0bcd2152b0a8420696bce26e7acacab365137fec3ff0e1a5e518a0f245bfd6c"} Jan 23 08:00:30 crc kubenswrapper[4982]: I0123 08:00:30.953338 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8j4j" event={"ID":"4942c45d-21ee-4378-ae42-3b5379bc5dc5","Type":"ContainerStarted","Data":"32f3a6518e857e9c43a57b2863988e41968b17d4637812498b66c06e96519399"} Jan 23 08:00:30 crc kubenswrapper[4982]: I0123 08:00:30.993989 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8j4j" podStartSLOduration=3.517382598 podStartE2EDuration="5.993958581s" podCreationTimestamp="2026-01-23 08:00:25 +0000 UTC" firstStartedPulling="2026-01-23 08:00:27.915713663 +0000 UTC m=+302.396077089" lastFinishedPulling="2026-01-23 08:00:30.392289686 +0000 UTC m=+304.872653072" observedRunningTime="2026-01-23 08:00:30.989024449 +0000 UTC m=+305.469387845" watchObservedRunningTime="2026-01-23 08:00:30.993958581 +0000 UTC m=+305.474321997" Jan 23 08:00:32 crc kubenswrapper[4982]: I0123 08:00:32.993050 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6fq6t"] Jan 23 08:00:32 crc kubenswrapper[4982]: I0123 08:00:32.994667 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.008576 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fq6t"] Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.101562 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smb64\" (UniqueName: \"kubernetes.io/projected/81c7e93d-c277-4f09-ba56-4603df959c0d-kube-api-access-smb64\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.101759 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c7e93d-c277-4f09-ba56-4603df959c0d-utilities\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.101890 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c7e93d-c277-4f09-ba56-4603df959c0d-catalog-content\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.203490 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c7e93d-c277-4f09-ba56-4603df959c0d-catalog-content\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.203646 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smb64\" (UniqueName: \"kubernetes.io/projected/81c7e93d-c277-4f09-ba56-4603df959c0d-kube-api-access-smb64\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.203697 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c7e93d-c277-4f09-ba56-4603df959c0d-utilities\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.204264 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c7e93d-c277-4f09-ba56-4603df959c0d-utilities\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.204264 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c7e93d-c277-4f09-ba56-4603df959c0d-catalog-content\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.229003 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-smb64\" (UniqueName: \"kubernetes.io/projected/81c7e93d-c277-4f09-ba56-4603df959c0d-kube-api-access-smb64\") pod \"community-operators-6fq6t\" (UID: \"81c7e93d-c277-4f09-ba56-4603df959c0d\") " pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.315430 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.575709 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fq6t"] Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.936496 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.937422 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.979998 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fq6t" event={"ID":"81c7e93d-c277-4f09-ba56-4603df959c0d","Type":"ContainerStarted","Data":"f88d6c8b66adc4ed05db21df71ed5bb0f66102b882bd7020a64d186bd0fa82be"} Jan 23 08:00:33 crc kubenswrapper[4982]: I0123 08:00:33.998680 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:34 crc kubenswrapper[4982]: I0123 08:00:34.989388 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fq6t" event={"ID":"81c7e93d-c277-4f09-ba56-4603df959c0d","Type":"ContainerStarted","Data":"e4035198d1f54f57c2760a77f0d13ea9122b4a133c24f8efd27bff17a26cf0b0"} Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.035441 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rx29v" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.599495 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w9mg8"] Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.640917 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.643114 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9mg8"] Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.744744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b332ef-a088-447d-a6ad-5fa2dee1b475-utilities\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.745246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbs7k\" (UniqueName: \"kubernetes.io/projected/78b332ef-a088-447d-a6ad-5fa2dee1b475-kube-api-access-nbs7k\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.745344 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b332ef-a088-447d-a6ad-5fa2dee1b475-catalog-content\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.846696 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b332ef-a088-447d-a6ad-5fa2dee1b475-utilities\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.846757 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbs7k\" (UniqueName: \"kubernetes.io/projected/78b332ef-a088-447d-a6ad-5fa2dee1b475-kube-api-access-nbs7k\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.846798 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b332ef-a088-447d-a6ad-5fa2dee1b475-catalog-content\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.847406 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b332ef-a088-447d-a6ad-5fa2dee1b475-catalog-content\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.847493 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b332ef-a088-447d-a6ad-5fa2dee1b475-utilities\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.874759 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nbs7k\" (UniqueName: \"kubernetes.io/projected/78b332ef-a088-447d-a6ad-5fa2dee1b475-kube-api-access-nbs7k\") pod \"redhat-operators-w9mg8\" (UID: \"78b332ef-a088-447d-a6ad-5fa2dee1b475\") " pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:35 crc kubenswrapper[4982]: I0123 08:00:35.963118 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:00:36 crc kubenswrapper[4982]: I0123 08:00:36.339664 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:36 crc kubenswrapper[4982]: I0123 08:00:36.340170 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8j4j" Jan 23 08:00:36 crc kubenswrapper[4982]: I0123 08:00:36.470371 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w9mg8"] Jan 23 08:00:36 crc kubenswrapper[4982]: W0123 08:00:36.492146 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78b332ef_a088_447d_a6ad_5fa2dee1b475.slice/crio-f0e1a211538d9292abfc3f8006adff4e37aeeb66f8772dca20a370003db78afc WatchSource:0}: Error finding container f0e1a211538d9292abfc3f8006adff4e37aeeb66f8772dca20a370003db78afc: Status 404 returned error can't find the container with id f0e1a211538d9292abfc3f8006adff4e37aeeb66f8772dca20a370003db78afc Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.005230 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9mg8" event={"ID":"78b332ef-a088-447d-a6ad-5fa2dee1b475","Type":"ContainerStarted","Data":"f0e1a211538d9292abfc3f8006adff4e37aeeb66f8772dca20a370003db78afc"} Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.007719 4982 generic.go:334] "Generic (PLEG): container finished" podID="81c7e93d-c277-4f09-ba56-4603df959c0d" containerID="e4035198d1f54f57c2760a77f0d13ea9122b4a133c24f8efd27bff17a26cf0b0" exitCode=0 Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.007780 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fq6t" event={"ID":"81c7e93d-c277-4f09-ba56-4603df959c0d","Type":"ContainerDied","Data":"e4035198d1f54f57c2760a77f0d13ea9122b4a133c24f8efd27bff17a26cf0b0"} Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.389807 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8j4j" podUID="4942c45d-21ee-4378-ae42-3b5379bc5dc5" containerName="registry-server" probeResult="failure" output=< Jan 23 08:00:37 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:00:37 crc kubenswrapper[4982]: > Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.592216 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mjsh6"] Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.594419 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.610910 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjsh6"] Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.679238 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67eaccbd-4800-4599-9778-0b1fd93d1c28-catalog-content\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.679604 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67eaccbd-4800-4599-9778-0b1fd93d1c28-utilities\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.679815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpx2\" (UniqueName: \"kubernetes.io/projected/67eaccbd-4800-4599-9778-0b1fd93d1c28-kube-api-access-8tpx2\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.781676 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67eaccbd-4800-4599-9778-0b1fd93d1c28-utilities\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.781755 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tpx2\" (UniqueName: \"kubernetes.io/projected/67eaccbd-4800-4599-9778-0b1fd93d1c28-kube-api-access-8tpx2\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.781835 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67eaccbd-4800-4599-9778-0b1fd93d1c28-catalog-content\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.782317 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67eaccbd-4800-4599-9778-0b1fd93d1c28-utilities\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.782458 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67eaccbd-4800-4599-9778-0b1fd93d1c28-catalog-content\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.819079 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8tpx2\" (UniqueName: \"kubernetes.io/projected/67eaccbd-4800-4599-9778-0b1fd93d1c28-kube-api-access-8tpx2\") pod \"community-operators-mjsh6\" (UID: \"67eaccbd-4800-4599-9778-0b1fd93d1c28\") " pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:37 crc kubenswrapper[4982]: I0123 08:00:37.931744 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mjsh6" Jan 23 08:00:38 crc kubenswrapper[4982]: I0123 08:00:38.175254 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mjsh6"] Jan 23 08:00:38 crc kubenswrapper[4982]: W0123 08:00:38.195791 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67eaccbd_4800_4599_9778_0b1fd93d1c28.slice/crio-2d75bf73e555be2831c297d837de88c30ee018b0ea2875f00184637617dea525 WatchSource:0}: Error finding container 2d75bf73e555be2831c297d837de88c30ee018b0ea2875f00184637617dea525: Status 404 returned error can't find the container with id 2d75bf73e555be2831c297d837de88c30ee018b0ea2875f00184637617dea525 Jan 23 08:00:39 crc kubenswrapper[4982]: I0123 08:00:39.030520 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjsh6" event={"ID":"67eaccbd-4800-4599-9778-0b1fd93d1c28","Type":"ContainerStarted","Data":"2d75bf73e555be2831c297d837de88c30ee018b0ea2875f00184637617dea525"} Jan 23 08:00:39 crc kubenswrapper[4982]: I0123 08:00:39.033977 4982 generic.go:334] "Generic (PLEG): container finished" podID="78b332ef-a088-447d-a6ad-5fa2dee1b475" containerID="7598da412d54fe028005fe5b5bd00e1d44ad415e391c2c454c0d385efd414826" exitCode=0 Jan 23 08:00:39 crc kubenswrapper[4982]: I0123 08:00:39.034059 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9mg8" event={"ID":"78b332ef-a088-447d-a6ad-5fa2dee1b475","Type":"ContainerDied","Data":"7598da412d54fe028005fe5b5bd00e1d44ad415e391c2c454c0d385efd414826"} Jan 23 08:00:39 crc kubenswrapper[4982]: I0123 08:00:39.989103 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f4hbf"] Jan 23 08:00:39 crc kubenswrapper[4982]: I0123 08:00:39.991406 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.008520 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4hbf"] Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.042471 4982 generic.go:334] "Generic (PLEG): container finished" podID="81c7e93d-c277-4f09-ba56-4603df959c0d" containerID="df8f0a70e52fe8cfafce8a4c495a42d64fd4babcc137454da509bda7a9ee3621" exitCode=0 Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.042582 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fq6t" event={"ID":"81c7e93d-c277-4f09-ba56-4603df959c0d","Type":"ContainerDied","Data":"df8f0a70e52fe8cfafce8a4c495a42d64fd4babcc137454da509bda7a9ee3621"} Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.046122 4982 generic.go:334] "Generic (PLEG): container finished" podID="67eaccbd-4800-4599-9778-0b1fd93d1c28" containerID="cbd7b963d7415ab20a7e86c398116081df86f2023e1e5d93920b46ccdcfee08a" exitCode=0 Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.046292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjsh6" event={"ID":"67eaccbd-4800-4599-9778-0b1fd93d1c28","Type":"ContainerDied","Data":"cbd7b963d7415ab20a7e86c398116081df86f2023e1e5d93920b46ccdcfee08a"} Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.120730 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4318f72-acf3-465f-b420-daaa8efd5061-catalog-content\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.120828 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkhg\" (UniqueName: \"kubernetes.io/projected/b4318f72-acf3-465f-b420-daaa8efd5061-kube-api-access-8wkhg\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.120969 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4318f72-acf3-465f-b420-daaa8efd5061-utilities\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.222545 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wkhg\" (UniqueName: \"kubernetes.io/projected/b4318f72-acf3-465f-b420-daaa8efd5061-kube-api-access-8wkhg\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.222698 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4318f72-acf3-465f-b420-daaa8efd5061-utilities\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.222767 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4318f72-acf3-465f-b420-daaa8efd5061-catalog-content\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.223419 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4318f72-acf3-465f-b420-daaa8efd5061-catalog-content\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.223480 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4318f72-acf3-465f-b420-daaa8efd5061-utilities\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.249729 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wkhg\" (UniqueName: \"kubernetes.io/projected/b4318f72-acf3-465f-b420-daaa8efd5061-kube-api-access-8wkhg\") pod \"redhat-operators-f4hbf\" (UID: \"b4318f72-acf3-465f-b420-daaa8efd5061\") " pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.310950 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:00:40 crc kubenswrapper[4982]: I0123 08:00:40.597535 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f4hbf"] Jan 23 08:00:41 crc kubenswrapper[4982]: I0123 08:00:41.057452 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4318f72-acf3-465f-b420-daaa8efd5061" containerID="6c3a2918afc574bc54d3f1455e65f41b471156b93d5ab4ae71219d27ab6a4fa7" exitCode=0 Jan 23 08:00:41 crc kubenswrapper[4982]: I0123 08:00:41.057521 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4hbf" event={"ID":"b4318f72-acf3-465f-b420-daaa8efd5061","Type":"ContainerDied","Data":"6c3a2918afc574bc54d3f1455e65f41b471156b93d5ab4ae71219d27ab6a4fa7"} Jan 23 08:00:41 crc kubenswrapper[4982]: I0123 08:00:41.058016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4hbf" event={"ID":"b4318f72-acf3-465f-b420-daaa8efd5061","Type":"ContainerStarted","Data":"7be28d1a132775faaf4f235145faca59181d9abbc7d72fb186b13df69026b36d"} Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.067949 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4hbf" event={"ID":"b4318f72-acf3-465f-b420-daaa8efd5061","Type":"ContainerStarted","Data":"8d7d27196324101a17fed2013aa7cdcfe9d6a639e4e6158fffbd67fa65204c7b"} Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.069397 4982 generic.go:334] "Generic (PLEG): container finished" podID="78b332ef-a088-447d-a6ad-5fa2dee1b475" containerID="357a2a17836a8d7468a1bc35025d7faf74273483714ccd9c7518311b298eaabe" exitCode=0 Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.069444 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9mg8" event={"ID":"78b332ef-a088-447d-a6ad-5fa2dee1b475","Type":"ContainerDied","Data":"357a2a17836a8d7468a1bc35025d7faf74273483714ccd9c7518311b298eaabe"} 
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.072249 4982 generic.go:334] "Generic (PLEG): container finished" podID="67eaccbd-4800-4599-9778-0b1fd93d1c28" containerID="ab97061ec5a986d9d6b90cef6252f18069a4be18fce136845d782e2ef17d7c78" exitCode=0
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.072329 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjsh6" event={"ID":"67eaccbd-4800-4599-9778-0b1fd93d1c28","Type":"ContainerDied","Data":"ab97061ec5a986d9d6b90cef6252f18069a4be18fce136845d782e2ef17d7c78"}
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.080685 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fq6t" event={"ID":"81c7e93d-c277-4f09-ba56-4603df959c0d","Type":"ContainerStarted","Data":"6976b27767b47faad63360614efdd8582503ebfaebca280b9915e16615e66fc0"}
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.151012 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6fq6t" podStartSLOduration=6.278492347 podStartE2EDuration="10.150989871s" podCreationTimestamp="2026-01-23 08:00:32 +0000 UTC" firstStartedPulling="2026-01-23 08:00:37.009730226 +0000 UTC m=+311.490093612" lastFinishedPulling="2026-01-23 08:00:40.88222774 +0000 UTC m=+315.362591136" observedRunningTime="2026-01-23 08:00:42.14459972 +0000 UTC m=+316.624963106" watchObservedRunningTime="2026-01-23 08:00:42.150989871 +0000 UTC m=+316.631353267"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.392667 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fxtrd"]
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.394158 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.412137 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxtrd"]
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.463404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b862fe-c362-4b57-a793-0ef76d91ae90-utilities\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.463477 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vskbz\" (UniqueName: \"kubernetes.io/projected/79b862fe-c362-4b57-a793-0ef76d91ae90-kube-api-access-vskbz\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.463531 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b862fe-c362-4b57-a793-0ef76d91ae90-catalog-content\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.564876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b862fe-c362-4b57-a793-0ef76d91ae90-utilities\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.564956 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vskbz\" (UniqueName: \"kubernetes.io/projected/79b862fe-c362-4b57-a793-0ef76d91ae90-kube-api-access-vskbz\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.565048 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b862fe-c362-4b57-a793-0ef76d91ae90-catalog-content\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.565580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79b862fe-c362-4b57-a793-0ef76d91ae90-utilities\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.565832 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79b862fe-c362-4b57-a793-0ef76d91ae90-catalog-content\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.607537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vskbz\" (UniqueName: \"kubernetes.io/projected/79b862fe-c362-4b57-a793-0ef76d91ae90-kube-api-access-vskbz\") pod \"community-operators-fxtrd\" (UID: \"79b862fe-c362-4b57-a793-0ef76d91ae90\") " pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.710065 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fxtrd"
Jan 23 08:00:42 crc kubenswrapper[4982]: I0123 08:00:42.967149 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fxtrd"]
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.091786 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4318f72-acf3-465f-b420-daaa8efd5061" containerID="8d7d27196324101a17fed2013aa7cdcfe9d6a639e4e6158fffbd67fa65204c7b" exitCode=0
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.091875 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4hbf" event={"ID":"b4318f72-acf3-465f-b420-daaa8efd5061","Type":"ContainerDied","Data":"8d7d27196324101a17fed2013aa7cdcfe9d6a639e4e6158fffbd67fa65204c7b"}
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.098587 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mjsh6" event={"ID":"67eaccbd-4800-4599-9778-0b1fd93d1c28","Type":"ContainerStarted","Data":"7daaceaed769d451daad513b646d59531566bb23ad646e9096bcfde0565ae016"}
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.105826 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtrd" event={"ID":"79b862fe-c362-4b57-a793-0ef76d91ae90","Type":"ContainerStarted","Data":"1191ecf92c83077c9c225c350681c60a7a8c20694cf8fd9293cd6de79871906f"}
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.134367 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mjsh6" podStartSLOduration=3.588079495 podStartE2EDuration="6.134348152s" podCreationTimestamp="2026-01-23 08:00:37 +0000 UTC" firstStartedPulling="2026-01-23 08:00:40.049683141 +0000 UTC m=+314.530046527" lastFinishedPulling="2026-01-23 08:00:42.595951798 +0000 UTC m=+317.076315184" observedRunningTime="2026-01-23 08:00:43.129999586 +0000 UTC m=+317.610362972" watchObservedRunningTime="2026-01-23 08:00:43.134348152 +0000 UTC m=+317.614711538"
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.316483 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6fq6t"
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.316561 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6fq6t"
Jan 23 08:00:43 crc kubenswrapper[4982]: I0123 08:00:43.408499 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6fq6t"
Jan 23 08:00:44 crc kubenswrapper[4982]: I0123 08:00:44.113712 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f4hbf" event={"ID":"b4318f72-acf3-465f-b420-daaa8efd5061","Type":"ContainerStarted","Data":"06e75932b94f625bc0f3099a4ab2d5f73a5a9d4830a91f5224c04900430ea3b5"}
Jan 23 08:00:44 crc kubenswrapper[4982]: I0123 08:00:44.116603 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w9mg8" event={"ID":"78b332ef-a088-447d-a6ad-5fa2dee1b475","Type":"ContainerStarted","Data":"29c06bf3fbf2bc070a72a125ea25c492cd91ba9a39a062bfd779175cceac4969"}
Jan 23 08:00:44 crc kubenswrapper[4982]: I0123 08:00:44.119318 4982 generic.go:334] "Generic (PLEG): container finished" podID="79b862fe-c362-4b57-a793-0ef76d91ae90" containerID="9e1c71cd9095d3e9f0490aedeeeb3334e0febe7f30abf9a749bf6839e7d7e349" exitCode=0
Jan 23 08:00:44 crc kubenswrapper[4982]: I0123 08:00:44.119364 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtrd" event={"ID":"79b862fe-c362-4b57-a793-0ef76d91ae90","Type":"ContainerDied","Data":"9e1c71cd9095d3e9f0490aedeeeb3334e0febe7f30abf9a749bf6839e7d7e349"}
Jan 23 08:00:44 crc kubenswrapper[4982]: I0123 08:00:44.165318 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f4hbf" podStartSLOduration=2.575504516 podStartE2EDuration="5.165295715s" podCreationTimestamp="2026-01-23 08:00:39 +0000 UTC" firstStartedPulling="2026-01-23 08:00:41.060943769 +0000 UTC m=+315.541307195" lastFinishedPulling="2026-01-23 08:00:43.650735008 +0000 UTC m=+318.131098394" observedRunningTime="2026-01-23 08:00:44.142541947 +0000 UTC m=+318.622905343" watchObservedRunningTime="2026-01-23 08:00:44.165295715 +0000 UTC m=+318.645659101"
Jan 23 08:00:44 crc kubenswrapper[4982]: I0123 08:00:44.183876 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w9mg8" podStartSLOduration=6.183362541 podStartE2EDuration="9.183844621s" podCreationTimestamp="2026-01-23 08:00:35 +0000 UTC" firstStartedPulling="2026-01-23 08:00:40.049912857 +0000 UTC m=+314.530276233" lastFinishedPulling="2026-01-23 08:00:43.050394927 +0000 UTC m=+317.530758313" observedRunningTime="2026-01-23 08:00:44.179644679 +0000 UTC m=+318.660008085" watchObservedRunningTime="2026-01-23 08:00:44.183844621 +0000 UTC m=+318.664208017"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.193161 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vpdt"]
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.195148 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.211907 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vpdt"]
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.307057 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5444979-363c-4b8d-b800-4a9e31d78f3c-utilities\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.307157 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p554\" (UniqueName: \"kubernetes.io/projected/c5444979-363c-4b8d-b800-4a9e31d78f3c-kube-api-access-6p554\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.307208 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5444979-363c-4b8d-b800-4a9e31d78f3c-catalog-content\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.408922 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5444979-363c-4b8d-b800-4a9e31d78f3c-utilities\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.409030 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p554\" (UniqueName: \"kubernetes.io/projected/c5444979-363c-4b8d-b800-4a9e31d78f3c-kube-api-access-6p554\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.409073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5444979-363c-4b8d-b800-4a9e31d78f3c-catalog-content\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.409660 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5444979-363c-4b8d-b800-4a9e31d78f3c-catalog-content\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.409913 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5444979-363c-4b8d-b800-4a9e31d78f3c-utilities\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.447587 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p554\" (UniqueName: \"kubernetes.io/projected/c5444979-363c-4b8d-b800-4a9e31d78f3c-kube-api-access-6p554\") pod \"redhat-operators-2vpdt\" (UID: \"c5444979-363c-4b8d-b800-4a9e31d78f3c\") " pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.510858 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vpdt"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.963353 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w9mg8"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.964104 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w9mg8"
Jan 23 08:00:45 crc kubenswrapper[4982]: I0123 08:00:45.967493 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vpdt"]
Jan 23 08:00:45 crc kubenswrapper[4982]: W0123 08:00:45.979469 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5444979_363c_4b8d_b800_4a9e31d78f3c.slice/crio-b91d98d6128c5a06c97bf054c66bac862b46f530f973d6cbefb14d7700e10373 WatchSource:0}: Error finding container b91d98d6128c5a06c97bf054c66bac862b46f530f973d6cbefb14d7700e10373: Status 404 returned error can't find the container with id b91d98d6128c5a06c97bf054c66bac862b46f530f973d6cbefb14d7700e10373
Jan 23 08:00:46 crc kubenswrapper[4982]: I0123 08:00:46.137313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vpdt" event={"ID":"c5444979-363c-4b8d-b800-4a9e31d78f3c","Type":"ContainerStarted","Data":"b91d98d6128c5a06c97bf054c66bac862b46f530f973d6cbefb14d7700e10373"}
Jan 23 08:00:46 crc kubenswrapper[4982]: I0123 08:00:46.397591 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8j4j"
Jan 23 08:00:46 crc kubenswrapper[4982]: I0123 08:00:46.448229 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8j4j"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.013675 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w9mg8" podUID="78b332ef-a088-447d-a6ad-5fa2dee1b475" containerName="registry-server" probeResult="failure" output=<
Jan 23 08:00:47 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s
Jan 23 08:00:47 crc kubenswrapper[4982]: >
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.153667 4982 generic.go:334] "Generic (PLEG): container finished" podID="c5444979-363c-4b8d-b800-4a9e31d78f3c" containerID="eb5ec83824e2f3022004bbab89302c01d3dd2393b180185769e8292a178c3698" exitCode=0
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.153734 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vpdt" event={"ID":"c5444979-363c-4b8d-b800-4a9e31d78f3c","Type":"ContainerDied","Data":"eb5ec83824e2f3022004bbab89302c01d3dd2393b180185769e8292a178c3698"}
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.158599 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtrd" event={"ID":"79b862fe-c362-4b57-a793-0ef76d91ae90","Type":"ContainerStarted","Data":"a7800fb2e06f7dd6354859e63b169ad8e891fb1d8374914e0c4b6ca155244341"}
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.387353 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xmwvp"]
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.388544 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.410482 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xmwvp"]
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.441571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82q89\" (UniqueName: \"kubernetes.io/projected/3e8b4111-8592-42cd-948c-795aff30524f-kube-api-access-82q89\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.441747 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8b4111-8592-42cd-948c-795aff30524f-catalog-content\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.441808 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8b4111-8592-42cd-948c-795aff30524f-utilities\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.543758 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8b4111-8592-42cd-948c-795aff30524f-catalog-content\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.543871 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8b4111-8592-42cd-948c-795aff30524f-utilities\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.544044 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82q89\" (UniqueName: \"kubernetes.io/projected/3e8b4111-8592-42cd-948c-795aff30524f-kube-api-access-82q89\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.545059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e8b4111-8592-42cd-948c-795aff30524f-catalog-content\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.545291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e8b4111-8592-42cd-948c-795aff30524f-utilities\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.566737 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82q89\" (UniqueName: \"kubernetes.io/projected/3e8b4111-8592-42cd-948c-795aff30524f-kube-api-access-82q89\") pod \"community-operators-xmwvp\" (UID: \"3e8b4111-8592-42cd-948c-795aff30524f\") " pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.590225 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqp77"]
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.591507 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.614489 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqp77"]
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.646202 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05772d57-547c-4312-9662-be516b1eff34-utilities\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.646285 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc29h\" (UniqueName: \"kubernetes.io/projected/05772d57-547c-4312-9662-be516b1eff34-kube-api-access-fc29h\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.646345 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05772d57-547c-4312-9662-be516b1eff34-catalog-content\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.705226 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xmwvp"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.748551 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05772d57-547c-4312-9662-be516b1eff34-utilities\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.748647 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc29h\" (UniqueName: \"kubernetes.io/projected/05772d57-547c-4312-9662-be516b1eff34-kube-api-access-fc29h\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.748719 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05772d57-547c-4312-9662-be516b1eff34-catalog-content\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.749275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05772d57-547c-4312-9662-be516b1eff34-utilities\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.749326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05772d57-547c-4312-9662-be516b1eff34-catalog-content\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.775559 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc29h\" (UniqueName: \"kubernetes.io/projected/05772d57-547c-4312-9662-be516b1eff34-kube-api-access-fc29h\") pod \"redhat-operators-nqp77\" (UID: \"05772d57-547c-4312-9662-be516b1eff34\") " pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.902325 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8xw2q"]
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.909694 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.917229 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqp77"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.936960 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mjsh6"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.937545 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mjsh6"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.956849 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-registry-tls\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.956919 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1b32be9a-afba-4a2c-b758-09f026b04f4d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.956958 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r699j\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-kube-api-access-r699j\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.956978 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1b32be9a-afba-4a2c-b758-09f026b04f4d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.957002 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-bound-sa-token\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.957183 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.957718 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1b32be9a-afba-4a2c-b758-09f026b04f4d-registry-certificates\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.957797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b32be9a-afba-4a2c-b758-09f026b04f4d-trusted-ca\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:47 crc kubenswrapper[4982]: I0123 08:00:47.957908 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8xw2q"]
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.059801 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-registry-tls\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.059860 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1b32be9a-afba-4a2c-b758-09f026b04f4d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.059887 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r699j\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-kube-api-access-r699j\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.059908 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-bound-sa-token\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.059925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1b32be9a-afba-4a2c-b758-09f026b04f4d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.059978 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1b32be9a-afba-4a2c-b758-09f026b04f4d-registry-certificates\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.060002 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b32be9a-afba-4a2c-b758-09f026b04f4d-trusted-ca\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.062418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1b32be9a-afba-4a2c-b758-09f026b04f4d-trusted-ca\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.062975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1b32be9a-afba-4a2c-b758-09f026b04f4d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.064045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1b32be9a-afba-4a2c-b758-09f026b04f4d-registry-certificates\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.068274 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mjsh6"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.068686 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1b32be9a-afba-4a2c-b758-09f026b04f4d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.068735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-registry-tls\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.079219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r699j\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-kube-api-access-r699j\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.079873 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.099219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1b32be9a-afba-4a2c-b758-09f026b04f4d-bound-sa-token\") pod \"image-registry-66df7c8f76-8xw2q\" (UID: \"1b32be9a-afba-4a2c-b758-09f026b04f4d\") " pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.149191 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xmwvp"]
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.172111 4982 generic.go:334] "Generic (PLEG): container finished" podID="79b862fe-c362-4b57-a793-0ef76d91ae90" containerID="a7800fb2e06f7dd6354859e63b169ad8e891fb1d8374914e0c4b6ca155244341" exitCode=0
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.172239 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtrd" event={"ID":"79b862fe-c362-4b57-a793-0ef76d91ae90","Type":"ContainerDied","Data":"a7800fb2e06f7dd6354859e63b169ad8e891fb1d8374914e0c4b6ca155244341"}
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.174609 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmwvp" event={"ID":"3e8b4111-8592-42cd-948c-795aff30524f","Type":"ContainerStarted","Data":"a0721029baace98b26784be7f9549d3eaadd44fbc94eca32a954b1de6d161a9f"}
Jan 23 08:00:48 crc kubenswrapper[4982]: I0123 08:00:48.259410 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mjsh6"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:48.292194 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:48.506442 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqp77"]
Jan 23 08:01:01 crc kubenswrapper[4982]: W0123 08:00:48.518008 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05772d57_547c_4312_9662_be516b1eff34.slice/crio-53226e243973a464ce840beb25c28ad2012d96723e290b8298be6e4b1a08febe WatchSource:0}: Error finding container 53226e243973a464ce840beb25c28ad2012d96723e290b8298be6e4b1a08febe: Status 404 returned error can't find the container with id 53226e243973a464ce840beb25c28ad2012d96723e290b8298be6e4b1a08febe
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.181314 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqp77" event={"ID":"05772d57-547c-4312-9662-be516b1eff34","Type":"ContainerStarted","Data":"53226e243973a464ce840beb25c28ad2012d96723e290b8298be6e4b1a08febe"}
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.811175 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vfcbn"]
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.813036 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.837327 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfcbn"]
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.890239 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx282\" (UniqueName: \"kubernetes.io/projected/13e99bc1-f91c-410e-a0bb-64924929a071-kube-api-access-jx282\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.890330 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e99bc1-f91c-410e-a0bb-64924929a071-catalog-content\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.890391 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e99bc1-f91c-410e-a0bb-64924929a071-utilities\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.993805 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx282\" (UniqueName: \"kubernetes.io/projected/13e99bc1-f91c-410e-a0bb-64924929a071-kube-api-access-jx282\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.993900 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e99bc1-f91c-410e-a0bb-64924929a071-catalog-content\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.993944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e99bc1-f91c-410e-a0bb-64924929a071-utilities\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.994530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13e99bc1-f91c-410e-a0bb-64924929a071-utilities\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:49.994729 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13e99bc1-f91c-410e-a0bb-64924929a071-catalog-content\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.006065 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xfshq"]
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.007805 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.026000 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfshq"]
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.027977 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx282\" (UniqueName: \"kubernetes.io/projected/13e99bc1-f91c-410e-a0bb-64924929a071-kube-api-access-jx282\") pod \"community-operators-vfcbn\" (UID: \"13e99bc1-f91c-410e-a0bb-64924929a071\") " pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.096016 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcprh\" (UniqueName: \"kubernetes.io/projected/84bf76b1-be1c-4809-9c81-59fea69e437f-kube-api-access-pcprh\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.096161 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84bf76b1-be1c-4809-9c81-59fea69e437f-catalog-content\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.096231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84bf76b1-be1c-4809-9c81-59fea69e437f-utilities\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.133101 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfcbn"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.197585 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84bf76b1-be1c-4809-9c81-59fea69e437f-catalog-content\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.197681 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84bf76b1-be1c-4809-9c81-59fea69e437f-utilities\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.197849 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcprh\" (UniqueName: \"kubernetes.io/projected/84bf76b1-be1c-4809-9c81-59fea69e437f-kube-api-access-pcprh\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.198016 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqp77" event={"ID":"05772d57-547c-4312-9662-be516b1eff34","Type":"ContainerStarted","Data":"798cc60cc3a9957d0a98ce656efcb17540e2f76a9abdb7a33be0fc2ac556693c"}
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.198330 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84bf76b1-be1c-4809-9c81-59fea69e437f-catalog-content\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.198765 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84bf76b1-be1c-4809-9c81-59fea69e437f-utilities\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.202590 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmwvp" event={"ID":"3e8b4111-8592-42cd-948c-795aff30524f","Type":"ContainerStarted","Data":"2af1e461ea80659be2059413e5e74de7f5c53902dbac93d5fa4e171b64dd41c9"}
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.225953 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcprh\" (UniqueName: \"kubernetes.io/projected/84bf76b1-be1c-4809-9c81-59fea69e437f-kube-api-access-pcprh\") pod \"redhat-operators-xfshq\" (UID: \"84bf76b1-be1c-4809-9c81-59fea69e437f\") " pod="openshift-marketplace/redhat-operators-xfshq"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.312170 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f4hbf"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.312954 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f4hbf"
Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:50.353931 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xfshq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:51.211956 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vpdt" event={"ID":"c5444979-363c-4b8d-b800-4a9e31d78f3c","Type":"ContainerStarted","Data":"11e50ab98c911a120e2aeadcb1ceb09acc840bf5f7fc77531121ae666ad2aa87"} Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:51.380177 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f4hbf" podUID="b4318f72-acf3-465f-b420-daaa8efd5061" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:01 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:01 crc kubenswrapper[4982]: > Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.193281 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5vr5"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.195226 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.203662 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5vr5"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.339080 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37b5a5b-1991-419f-bdf3-106f3b6579d3-catalog-content\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.339233 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csflq\" (UniqueName: \"kubernetes.io/projected/a37b5a5b-1991-419f-bdf3-106f3b6579d3-kube-api-access-csflq\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.339278 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37b5a5b-1991-419f-bdf3-106f3b6579d3-utilities\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.407723 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9tb5h"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.410127 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.422190 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tb5h"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.441040 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csflq\" (UniqueName: \"kubernetes.io/projected/a37b5a5b-1991-419f-bdf3-106f3b6579d3-kube-api-access-csflq\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.441109 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37b5a5b-1991-419f-bdf3-106f3b6579d3-utilities\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.441239 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37b5a5b-1991-419f-bdf3-106f3b6579d3-catalog-content\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.442113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a37b5a5b-1991-419f-bdf3-106f3b6579d3-catalog-content\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.443517 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a37b5a5b-1991-419f-bdf3-106f3b6579d3-utilities\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.477106 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csflq\" (UniqueName: \"kubernetes.io/projected/a37b5a5b-1991-419f-bdf3-106f3b6579d3-kube-api-access-csflq\") pod \"community-operators-z5vr5\" (UID: \"a37b5a5b-1991-419f-bdf3-106f3b6579d3\") " pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.529982 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.543331 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrmr\" (UniqueName: \"kubernetes.io/projected/db97be89-b4ed-437e-8834-b2ef2de2abc7-kube-api-access-xcrmr\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.543409 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db97be89-b4ed-437e-8834-b2ef2de2abc7-utilities\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.543474 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db97be89-b4ed-437e-8834-b2ef2de2abc7-catalog-content\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.644757 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db97be89-b4ed-437e-8834-b2ef2de2abc7-catalog-content\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.644847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrmr\" (UniqueName: \"kubernetes.io/projected/db97be89-b4ed-437e-8834-b2ef2de2abc7-kube-api-access-xcrmr\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.644882 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db97be89-b4ed-437e-8834-b2ef2de2abc7-utilities\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.645417 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db97be89-b4ed-437e-8834-b2ef2de2abc7-utilities\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.645693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db97be89-b4ed-437e-8834-b2ef2de2abc7-catalog-content\") pod \"redhat-operators-9tb5h\" (UID: \"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.667860 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrmr\" (UniqueName: \"kubernetes.io/projected/db97be89-b4ed-437e-8834-b2ef2de2abc7-kube-api-access-xcrmr\") pod \"redhat-operators-9tb5h\" (UID: 
\"db97be89-b4ed-437e-8834-b2ef2de2abc7\") " pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:52.745851 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:53.255765 4982 generic.go:334] "Generic (PLEG): container finished" podID="c5444979-363c-4b8d-b800-4a9e31d78f3c" containerID="11e50ab98c911a120e2aeadcb1ceb09acc840bf5f7fc77531121ae666ad2aa87" exitCode=0 Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:53.255824 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vpdt" event={"ID":"c5444979-363c-4b8d-b800-4a9e31d78f3c","Type":"ContainerDied","Data":"11e50ab98c911a120e2aeadcb1ceb09acc840bf5f7fc77531121ae666ad2aa87"} Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:53.379042 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6fq6t" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.599519 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ckdqx"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.600964 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.617740 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckdqx"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.791386 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be123c4f-514d-4704-9814-e01766eeb909-utilities\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.791436 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be123c4f-514d-4704-9814-e01766eeb909-catalog-content\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.792021 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws9l\" (UniqueName: \"kubernetes.io/projected/be123c4f-514d-4704-9814-e01766eeb909-kube-api-access-pws9l\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.794140 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ch4z"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.796115 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.813384 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ch4z"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.893759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be123c4f-514d-4704-9814-e01766eeb909-utilities\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.893816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be123c4f-514d-4704-9814-e01766eeb909-catalog-content\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.893974 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws9l\" (UniqueName: \"kubernetes.io/projected/be123c4f-514d-4704-9814-e01766eeb909-kube-api-access-pws9l\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.894374 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be123c4f-514d-4704-9814-e01766eeb909-utilities\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.894492 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be123c4f-514d-4704-9814-e01766eeb909-catalog-content\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.918219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws9l\" (UniqueName: \"kubernetes.io/projected/be123c4f-514d-4704-9814-e01766eeb909-kube-api-access-pws9l\") pod \"community-operators-ckdqx\" (UID: \"be123c4f-514d-4704-9814-e01766eeb909\") " pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.962298 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.996313 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgs9r\" (UniqueName: \"kubernetes.io/projected/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-kube-api-access-jgs9r\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.996393 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-utilities\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:54.996788 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-catalog-content\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.098284 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-catalog-content\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.098406 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgs9r\" (UniqueName: \"kubernetes.io/projected/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-kube-api-access-jgs9r\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.098458 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-utilities\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.099147 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-catalog-content\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.099296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-utilities\") pod \"redhat-operators-6ch4z\" (UID: \"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.133189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgs9r\" (UniqueName: \"kubernetes.io/projected/b4adcec3-f2ed-44e1-b9e0-7cf162f397a4-kube-api-access-jgs9r\") pod \"redhat-operators-6ch4z\" (UID: 
\"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4\") " pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:55.416559 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:56.028766 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:56.104583 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w9mg8" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:56.997804 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bmvlq"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.000147 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.017666 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmvlq"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.133990 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aeb299-0e42-4d42-bc18-b410ce52234d-utilities\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.134197 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7tw\" (UniqueName: \"kubernetes.io/projected/c0aeb299-0e42-4d42-bc18-b410ce52234d-kube-api-access-wn7tw\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.134258 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aeb299-0e42-4d42-bc18-b410ce52234d-catalog-content\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.199849 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4vg68"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.202632 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.219555 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vg68"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.236678 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7tw\" (UniqueName: \"kubernetes.io/projected/c0aeb299-0e42-4d42-bc18-b410ce52234d-kube-api-access-wn7tw\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.236783 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aeb299-0e42-4d42-bc18-b410ce52234d-catalog-content\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.236893 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aeb299-0e42-4d42-bc18-b410ce52234d-utilities\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.237819 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0aeb299-0e42-4d42-bc18-b410ce52234d-utilities\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.237979 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0aeb299-0e42-4d42-bc18-b410ce52234d-catalog-content\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.270987 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7tw\" (UniqueName: \"kubernetes.io/projected/c0aeb299-0e42-4d42-bc18-b410ce52234d-kube-api-access-wn7tw\") pod \"community-operators-bmvlq\" (UID: \"c0aeb299-0e42-4d42-bc18-b410ce52234d\") " pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.338772 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948db098-f443-480e-b708-f2593a88e10a-catalog-content\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.338879 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nst4l\" (UniqueName: \"kubernetes.io/projected/948db098-f443-480e-b708-f2593a88e10a-kube-api-access-nst4l\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.338946 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948db098-f443-480e-b708-f2593a88e10a-utilities\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.354507 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.440064 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948db098-f443-480e-b708-f2593a88e10a-catalog-content\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.440110 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nst4l\" (UniqueName: \"kubernetes.io/projected/948db098-f443-480e-b708-f2593a88e10a-kube-api-access-nst4l\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.440147 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948db098-f443-480e-b708-f2593a88e10a-utilities\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.440668 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/948db098-f443-480e-b708-f2593a88e10a-utilities\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.440787 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/948db098-f443-480e-b708-f2593a88e10a-catalog-content\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.457481 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nst4l\" (UniqueName: \"kubernetes.io/projected/948db098-f443-480e-b708-f2593a88e10a-kube-api-access-nst4l\") pod \"redhat-operators-4vg68\" (UID: \"948db098-f443-480e-b708-f2593a88e10a\") " pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:57.529266 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.302154 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e8b4111-8592-42cd-948c-795aff30524f" containerID="2af1e461ea80659be2059413e5e74de7f5c53902dbac93d5fa4e171b64dd41c9" exitCode=0 Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.302276 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmwvp" event={"ID":"3e8b4111-8592-42cd-948c-795aff30524f","Type":"ContainerDied","Data":"2af1e461ea80659be2059413e5e74de7f5c53902dbac93d5fa4e171b64dd41c9"} Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.304572 4982 generic.go:334] "Generic (PLEG): container finished" podID="05772d57-547c-4312-9662-be516b1eff34" containerID="798cc60cc3a9957d0a98ce656efcb17540e2f76a9abdb7a33be0fc2ac556693c" exitCode=0 Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.304701 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqp77" event={"ID":"05772d57-547c-4312-9662-be516b1eff34","Type":"ContainerDied","Data":"798cc60cc3a9957d0a98ce656efcb17540e2f76a9abdb7a33be0fc2ac556693c"} Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.396283 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4n4bp"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.398472 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.423774 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4n4bp"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.589821 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5466e8-4630-426e-afa1-f7d0605cfe72-utilities\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.589928 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7md\" (UniqueName: \"kubernetes.io/projected/ef5466e8-4630-426e-afa1-f7d0605cfe72-kube-api-access-mc7md\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.590081 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5466e8-4630-426e-afa1-f7d0605cfe72-catalog-content\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.600790 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fgt87"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.603059 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.612373 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fgt87"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.691353 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5466e8-4630-426e-afa1-f7d0605cfe72-catalog-content\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.691777 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5466e8-4630-426e-afa1-f7d0605cfe72-utilities\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.691930 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7md\" (UniqueName: \"kubernetes.io/projected/ef5466e8-4630-426e-afa1-f7d0605cfe72-kube-api-access-mc7md\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.692015 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5466e8-4630-426e-afa1-f7d0605cfe72-catalog-content\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.692275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5466e8-4630-426e-afa1-f7d0605cfe72-utilities\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.711707 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7md\" (UniqueName: \"kubernetes.io/projected/ef5466e8-4630-426e-afa1-f7d0605cfe72-kube-api-access-mc7md\") pod \"community-operators-4n4bp\" (UID: \"ef5466e8-4630-426e-afa1-f7d0605cfe72\") " pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.781512 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.793039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd8cca65-3429-44d6-8da9-3aaf3dda8202-utilities\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.793087 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6k7p\" (UniqueName: \"kubernetes.io/projected/dd8cca65-3429-44d6-8da9-3aaf3dda8202-kube-api-access-t6k7p\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.793216 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd8cca65-3429-44d6-8da9-3aaf3dda8202-catalog-content\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.895417 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd8cca65-3429-44d6-8da9-3aaf3dda8202-utilities\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.895511 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6k7p\" (UniqueName: \"kubernetes.io/projected/dd8cca65-3429-44d6-8da9-3aaf3dda8202-kube-api-access-t6k7p\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.895700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd8cca65-3429-44d6-8da9-3aaf3dda8202-catalog-content\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.897788 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd8cca65-3429-44d6-8da9-3aaf3dda8202-utilities\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.897891 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd8cca65-3429-44d6-8da9-3aaf3dda8202-catalog-content\") pod \"redhat-operators-fgt87\" (UID: \"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:00:59.935215 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6k7p\" (UniqueName: \"kubernetes.io/projected/dd8cca65-3429-44d6-8da9-3aaf3dda8202-kube-api-access-t6k7p\") pod \"redhat-operators-fgt87\" (UID: 
\"dd8cca65-3429-44d6-8da9-3aaf3dda8202\") " pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:00.225432 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:00.377023 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:00.427444 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f4hbf" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.793989 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xwh47"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.795565 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.806968 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwh47"] Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.829685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/534fe1ce-1e6b-4bc4-9091-a6b04d390502-catalog-content\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.829736 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/534fe1ce-1e6b-4bc4-9091-a6b04d390502-utilities\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.829763 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fntz\" (UniqueName: \"kubernetes.io/projected/534fe1ce-1e6b-4bc4-9091-a6b04d390502-kube-api-access-5fntz\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.931724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fntz\" (UniqueName: \"kubernetes.io/projected/534fe1ce-1e6b-4bc4-9091-a6b04d390502-kube-api-access-5fntz\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.932241 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/534fe1ce-1e6b-4bc4-9091-a6b04d390502-catalog-content\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.932280 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/534fe1ce-1e6b-4bc4-9091-a6b04d390502-utilities\") pod \"community-operators-xwh47\" (UID: 
\"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.932854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/534fe1ce-1e6b-4bc4-9091-a6b04d390502-utilities\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.932854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/534fe1ce-1e6b-4bc4-9091-a6b04d390502-catalog-content\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:01 crc kubenswrapper[4982]: I0123 08:01:01.962807 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fntz\" (UniqueName: \"kubernetes.io/projected/534fe1ce-1e6b-4bc4-9091-a6b04d390502-kube-api-access-5fntz\") pod \"community-operators-xwh47\" (UID: \"534fe1ce-1e6b-4bc4-9091-a6b04d390502\") " pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:01.993459 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9g4lb"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:01.995162 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.034219 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084d777f-fe2d-498d-b0a7-1cc1e17bf620-catalog-content\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.034287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246c5\" (UniqueName: \"kubernetes.io/projected/084d777f-fe2d-498d-b0a7-1cc1e17bf620-kube-api-access-246c5\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.034347 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084d777f-fe2d-498d-b0a7-1cc1e17bf620-utilities\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.055208 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9g4lb"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.114215 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.138882 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084d777f-fe2d-498d-b0a7-1cc1e17bf620-catalog-content\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.138944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-246c5\" (UniqueName: \"kubernetes.io/projected/084d777f-fe2d-498d-b0a7-1cc1e17bf620-kube-api-access-246c5\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.139021 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084d777f-fe2d-498d-b0a7-1cc1e17bf620-utilities\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.139536 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/084d777f-fe2d-498d-b0a7-1cc1e17bf620-catalog-content\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.139548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/084d777f-fe2d-498d-b0a7-1cc1e17bf620-utilities\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.161954 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-246c5\" (UniqueName: \"kubernetes.io/projected/084d777f-fe2d-498d-b0a7-1cc1e17bf620-kube-api-access-246c5\") pod \"redhat-operators-9g4lb\" (UID: \"084d777f-fe2d-498d-b0a7-1cc1e17bf620\") " pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.347389 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vpdt" event={"ID":"c5444979-363c-4b8d-b800-4a9e31d78f3c","Type":"ContainerStarted","Data":"9841cc3d3708ed9c55f3ef8a99d8c2fb7713247538371b1feae9e49930856316"} Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.356785 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fxtrd" event={"ID":"79b862fe-c362-4b57-a793-0ef76d91ae90","Type":"ContainerStarted","Data":"8c619ad7216311ddc28fbb7f76e9a68e011a460a8c031a1562ca3672d4c5ddea"} Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.374065 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vpdt" podStartSLOduration=3.029216659 podStartE2EDuration="17.374031347s" podCreationTimestamp="2026-01-23 08:00:45 +0000 UTC" firstStartedPulling="2026-01-23 08:00:47.155308235 +0000 UTC m=+321.635671621" lastFinishedPulling="2026-01-23 08:01:01.500122923 +0000 UTC m=+335.980486309" observedRunningTime="2026-01-23 
08:01:02.373912674 +0000 UTC m=+336.854276050" watchObservedRunningTime="2026-01-23 08:01:02.374031347 +0000 UTC m=+336.854394743" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.388984 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.395471 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fxtrd" podStartSLOduration=3.282932937 podStartE2EDuration="20.39545116s" podCreationTimestamp="2026-01-23 08:00:42 +0000 UTC" firstStartedPulling="2026-01-23 08:00:44.120653002 +0000 UTC m=+318.601016378" lastFinishedPulling="2026-01-23 08:01:01.233171215 +0000 UTC m=+335.713534601" observedRunningTime="2026-01-23 08:01:02.39470513 +0000 UTC m=+336.875068516" watchObservedRunningTime="2026-01-23 08:01:02.39545116 +0000 UTC m=+336.875814536" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.520933 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xwh47"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.592536 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xfshq"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.614644 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4n4bp"] Jan 23 08:01:02 crc kubenswrapper[4982]: W0123 08:01:02.616046 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84bf76b1_be1c_4809_9c81_59fea69e437f.slice/crio-9fc921ac3526ec5a6486d7a81230581aa88de7ec851414c546931f53a778b17f WatchSource:0}: Error finding container 9fc921ac3526ec5a6486d7a81230581aa88de7ec851414c546931f53a778b17f: Status 404 returned error can't find the container with id 9fc921ac3526ec5a6486d7a81230581aa88de7ec851414c546931f53a778b17f Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.644020 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5vr5"] Jan 23 08:01:02 crc kubenswrapper[4982]: W0123 08:01:02.649976 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda37b5a5b_1991_419f_bdf3_106f3b6579d3.slice/crio-9157d301c79afe54e3118f9abb309b7035ad88396ac399087b57436569738cfa WatchSource:0}: Error finding container 9157d301c79afe54e3118f9abb309b7035ad88396ac399087b57436569738cfa: Status 404 returned error can't find the container with id 9157d301c79afe54e3118f9abb309b7035ad88396ac399087b57436569738cfa Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.660866 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8xw2q"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.672077 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ckdqx"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.676862 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfcbn"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.681514 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ch4z"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.685102 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-4vg68"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.688917 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fgt87"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.696379 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tb5h"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.704475 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmvlq"] Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.710684 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fxtrd" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.710804 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fxtrd" Jan 23 08:01:02 crc kubenswrapper[4982]: I0123 08:01:02.713445 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9g4lb"] Jan 23 08:01:02 crc kubenswrapper[4982]: W0123 08:01:02.719201 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0aeb299_0e42_4d42_bc18_b410ce52234d.slice/crio-355e74eb695db9ec9baad8524bff47e67277891fe5cbc552f663246edeb2247e WatchSource:0}: Error finding container 355e74eb695db9ec9baad8524bff47e67277891fe5cbc552f663246edeb2247e: Status 404 returned error can't find the container with id 355e74eb695db9ec9baad8524bff47e67277891fe5cbc552f663246edeb2247e Jan 23 08:01:02 crc kubenswrapper[4982]: W0123 08:01:02.721250 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd8cca65_3429_44d6_8da9_3aaf3dda8202.slice/crio-ed34c2af3b1edf4b9ef493e58d8c5de83d042e843c48bb3951ed4a90b15262bc WatchSource:0}: Error finding container ed34c2af3b1edf4b9ef493e58d8c5de83d042e843c48bb3951ed4a90b15262bc: Status 404 returned error can't find the container with id ed34c2af3b1edf4b9ef493e58d8c5de83d042e843c48bb3951ed4a90b15262bc Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.372228 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vg68" event={"ID":"948db098-f443-480e-b708-f2593a88e10a","Type":"ContainerStarted","Data":"6f535d645df625356f480c468d90201dc1864c8fde791571a37e6f6c077ae583"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.378889 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ch4z" event={"ID":"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4","Type":"ContainerStarted","Data":"41abae7b7a512629d67fcb756a6c52924ee41f2bcea7b89520a124252b9b4007"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.380222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n4bp" event={"ID":"ef5466e8-4630-426e-afa1-f7d0605cfe72","Type":"ContainerStarted","Data":"2a61af702ab24fcbbe3b1f0cec67b238981612a81bbc0ecb20871268d5fa2f32"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.382868 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgt87" event={"ID":"dd8cca65-3429-44d6-8da9-3aaf3dda8202","Type":"ContainerStarted","Data":"ed34c2af3b1edf4b9ef493e58d8c5de83d042e843c48bb3951ed4a90b15262bc"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.384868 4982 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5vr5" event={"ID":"a37b5a5b-1991-419f-bdf3-106f3b6579d3","Type":"ContainerStarted","Data":"9157d301c79afe54e3118f9abb309b7035ad88396ac399087b57436569738cfa"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.386004 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g4lb" event={"ID":"084d777f-fe2d-498d-b0a7-1cc1e17bf620","Type":"ContainerStarted","Data":"2f72e1b7da1dc96c13d53c9849cea8e9b8442384cf841d466142bef183763c5c"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.387158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckdqx" event={"ID":"be123c4f-514d-4704-9814-e01766eeb909","Type":"ContainerStarted","Data":"61276a129a035e36d19d28baaf4747d11c7a568321b87505d0b41130a4903478"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.388103 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcbn" event={"ID":"13e99bc1-f91c-410e-a0bb-64924929a071","Type":"ContainerStarted","Data":"a9d9255d64be69b50eff550aab6898e5ed2cf0974a9960f2c4f8fe40eefa02be"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.388847 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q" event={"ID":"1b32be9a-afba-4a2c-b758-09f026b04f4d","Type":"ContainerStarted","Data":"b1bca68fbfbf8ab4831a3355de6f2e3b45a3f56844cbbdba78caf0d67fab2fc8"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.390084 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwh47" event={"ID":"534fe1ce-1e6b-4bc4-9091-a6b04d390502","Type":"ContainerStarted","Data":"dbac19f3d64e0a9f9ee3300023718352d1248ccda443094ca3ebb5c0c4ffb41d"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.391367 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tb5h" event={"ID":"db97be89-b4ed-437e-8834-b2ef2de2abc7","Type":"ContainerStarted","Data":"972be79f48dea3ac638e621d010440853e5736e23aaf0de186ced700f53feb72"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.392800 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvlq" event={"ID":"c0aeb299-0e42-4d42-bc18-b410ce52234d","Type":"ContainerStarted","Data":"355e74eb695db9ec9baad8524bff47e67277891fe5cbc552f663246edeb2247e"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.393856 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfshq" event={"ID":"84bf76b1-be1c-4809-9c81-59fea69e437f","Type":"ContainerStarted","Data":"9fc921ac3526ec5a6486d7a81230581aa88de7ec851414c546931f53a778b17f"} Jan 23 08:01:03 crc kubenswrapper[4982]: I0123 08:01:03.775346 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fxtrd" podUID="79b862fe-c362-4b57-a793-0ef76d91ae90" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:03 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:03 crc kubenswrapper[4982]: > Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.195497 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k6zdf"] Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.197180 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.215261 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6zdf"] Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.381212 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-catalog-content\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.381497 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4k28\" (UniqueName: \"kubernetes.io/projected/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-kube-api-access-d4k28\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.381675 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-utilities\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.412039 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9fcd4"] Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.414981 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.427849 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fcd4"] Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.484061 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-catalog-content\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.484143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4k28\" (UniqueName: \"kubernetes.io/projected/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-kube-api-access-d4k28\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.484195 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-utilities\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.484682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-utilities\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.485937 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-catalog-content\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.512145 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4k28\" (UniqueName: \"kubernetes.io/projected/7a301bdd-53d6-442f-8a9d-0ed62f7ffed0-kube-api-access-d4k28\") pod \"community-operators-k6zdf\" (UID: \"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0\") " pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.517744 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.585909 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdd7e8b-ca6f-4741-9801-1690e746df69-catalog-content\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.586461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdd7e8b-ca6f-4741-9801-1690e746df69-utilities\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.586580 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cccmf\" (UniqueName: \"kubernetes.io/projected/bfdd7e8b-ca6f-4741-9801-1690e746df69-kube-api-access-cccmf\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.687806 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdd7e8b-ca6f-4741-9801-1690e746df69-utilities\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.687905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cccmf\" (UniqueName: \"kubernetes.io/projected/bfdd7e8b-ca6f-4741-9801-1690e746df69-kube-api-access-cccmf\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.687937 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdd7e8b-ca6f-4741-9801-1690e746df69-catalog-content\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.688413 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfdd7e8b-ca6f-4741-9801-1690e746df69-catalog-content\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.689586 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfdd7e8b-ca6f-4741-9801-1690e746df69-utilities\") pod \"redhat-operators-9fcd4\" (UID: \"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.710834 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cccmf\" (UniqueName: \"kubernetes.io/projected/bfdd7e8b-ca6f-4741-9801-1690e746df69-kube-api-access-cccmf\") pod \"redhat-operators-9fcd4\" (UID: 
\"bfdd7e8b-ca6f-4741-9801-1690e746df69\") " pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.809428 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k6zdf"] Jan 23 08:01:04 crc kubenswrapper[4982]: W0123 08:01:04.814838 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a301bdd_53d6_442f_8a9d_0ed62f7ffed0.slice/crio-af548ce55a52835c265fdbc6b2edbe6d9e712efd496b123ad3ab0848b72c2d93 WatchSource:0}: Error finding container af548ce55a52835c265fdbc6b2edbe6d9e712efd496b123ad3ab0848b72c2d93: Status 404 returned error can't find the container with id af548ce55a52835c265fdbc6b2edbe6d9e712efd496b123ad3ab0848b72c2d93 Jan 23 08:01:04 crc kubenswrapper[4982]: I0123 08:01:04.910665 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:05 crc kubenswrapper[4982]: I0123 08:01:05.356988 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9fcd4"] Jan 23 08:01:05 crc kubenswrapper[4982]: I0123 08:01:05.418518 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fcd4" event={"ID":"bfdd7e8b-ca6f-4741-9801-1690e746df69","Type":"ContainerStarted","Data":"f748dd0376ec0aa1410d0f9c9b40c4bc69736e5158060d9a946a77ffbb00d30c"} Jan 23 08:01:05 crc kubenswrapper[4982]: I0123 08:01:05.419979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6zdf" event={"ID":"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0","Type":"ContainerStarted","Data":"af548ce55a52835c265fdbc6b2edbe6d9e712efd496b123ad3ab0848b72c2d93"} Jan 23 08:01:05 crc kubenswrapper[4982]: I0123 08:01:05.511164 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vpdt" Jan 23 08:01:05 crc kubenswrapper[4982]: I0123 08:01:05.511222 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vpdt" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.429890 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ch4z" event={"ID":"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4","Type":"ContainerStarted","Data":"783d44ccedcc045eef714e88d6f4e89b2b2fcac9597aca692d51a32e2935ce3e"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.433000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tb5h" event={"ID":"db97be89-b4ed-437e-8834-b2ef2de2abc7","Type":"ContainerStarted","Data":"9cf89a387b160370e8ecc8f047b08685e2dbdfd2564f1ef82bb95170b979d09a"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.437429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgt87" event={"ID":"dd8cca65-3429-44d6-8da9-3aaf3dda8202","Type":"ContainerStarted","Data":"8e96f7500a4e737c705cc5de3ed4b182c77fdae619e2cb8a64fa57c4b32f8f00"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.438379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckdqx" event={"ID":"be123c4f-514d-4704-9814-e01766eeb909","Type":"ContainerStarted","Data":"c58db03e1aaa779bd7f07f63103a0ea7d161bde23c58d6ecd60d90d8b91050df"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.440534 4982 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmwvp" event={"ID":"3e8b4111-8592-42cd-948c-795aff30524f","Type":"ContainerStarted","Data":"ce1e80658c1d72d591ad40cef201a3ed4b909bdc605383486ee92b5692ebabe0"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.442767 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vg68" event={"ID":"948db098-f443-480e-b708-f2593a88e10a","Type":"ContainerStarted","Data":"98db4504a11679f1f9dd07cf06564261d6bc567ced2c3b3d6a9d1e2a445493e5"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.444652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fcd4" event={"ID":"bfdd7e8b-ca6f-4741-9801-1690e746df69","Type":"ContainerStarted","Data":"90b6c68a32e118f5e5ade3dc8d3f0efe27d08ccedb421cf169fe6be944a9024f"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.446049 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q" event={"ID":"1b32be9a-afba-4a2c-b758-09f026b04f4d","Type":"ContainerStarted","Data":"49ad47cbff242202c0cf93930dcce975219b79ff18471d7aa107778e5129b791"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.447468 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g4lb" event={"ID":"084d777f-fe2d-498d-b0a7-1cc1e17bf620","Type":"ContainerStarted","Data":"0c54d177ec15dedb9878c9c1c50fca09ee9097898454351afaad2e5523cc8938"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.449282 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvlq" event={"ID":"c0aeb299-0e42-4d42-bc18-b410ce52234d","Type":"ContainerStarted","Data":"93b4cf357a4792368ea50cb5703be36f187418d4a342386673e05f74d460f408"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.450353 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcbn" event={"ID":"13e99bc1-f91c-410e-a0bb-64924929a071","Type":"ContainerStarted","Data":"17e793484222b9a1845a0338ab98574530fbe7e25c38fa97833fed83aa6aa641"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.451556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwh47" event={"ID":"534fe1ce-1e6b-4bc4-9091-a6b04d390502","Type":"ContainerStarted","Data":"7ad8f28e24ed85128f03878877a56975b93a9b66605dd0186afda67eef526386"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.453446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5vr5" event={"ID":"a37b5a5b-1991-419f-bdf3-106f3b6579d3","Type":"ContainerStarted","Data":"d61cb5ca176f88ab7c86de1943c575c5e990e86880d8939e5cddfbb201f4573d"} Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.551520 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vpdt" podUID="c5444979-363c-4b8d-b800-4a9e31d78f3c" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:06 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:06 crc kubenswrapper[4982]: > Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.592077 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dn9p9"] Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.593689 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.604463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn9p9"] Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.749758 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbef205-368e-4072-b046-7a15121b9a88-utilities\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.750347 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbef205-368e-4072-b046-7a15121b9a88-catalog-content\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.750464 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlngf\" (UniqueName: \"kubernetes.io/projected/ffbef205-368e-4072-b046-7a15121b9a88-kube-api-access-xlngf\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.799969 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4q57"] Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.803512 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.820885 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4q57"] Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.851900 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbef205-368e-4072-b046-7a15121b9a88-utilities\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.851950 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbef205-368e-4072-b046-7a15121b9a88-catalog-content\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.851984 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlngf\" (UniqueName: \"kubernetes.io/projected/ffbef205-368e-4072-b046-7a15121b9a88-kube-api-access-xlngf\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.852691 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffbef205-368e-4072-b046-7a15121b9a88-utilities\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.852793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffbef205-368e-4072-b046-7a15121b9a88-catalog-content\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.878956 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlngf\" (UniqueName: \"kubernetes.io/projected/ffbef205-368e-4072-b046-7a15121b9a88-kube-api-access-xlngf\") pod \"community-operators-dn9p9\" (UID: \"ffbef205-368e-4072-b046-7a15121b9a88\") " pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.933239 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.953232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8dc713-e17d-4ffa-9b86-985b4d80c024-utilities\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.953292 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8dc713-e17d-4ffa-9b86-985b4d80c024-catalog-content\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:06 crc kubenswrapper[4982]: I0123 08:01:06.953333 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnpp\" (UniqueName: \"kubernetes.io/projected/0c8dc713-e17d-4ffa-9b86-985b4d80c024-kube-api-access-7dnpp\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.057587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8dc713-e17d-4ffa-9b86-985b4d80c024-utilities\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.058146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8dc713-e17d-4ffa-9b86-985b4d80c024-catalog-content\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.058180 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnpp\" (UniqueName: \"kubernetes.io/projected/0c8dc713-e17d-4ffa-9b86-985b4d80c024-kube-api-access-7dnpp\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.058439 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8dc713-e17d-4ffa-9b86-985b4d80c024-utilities\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.058713 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8dc713-e17d-4ffa-9b86-985b4d80c024-catalog-content\") pod \"redhat-operators-l4q57\" (UID: \"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.087865 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnpp\" (UniqueName: \"kubernetes.io/projected/0c8dc713-e17d-4ffa-9b86-985b4d80c024-kube-api-access-7dnpp\") pod \"redhat-operators-l4q57\" (UID: 
\"0c8dc713-e17d-4ffa-9b86-985b4d80c024\") " pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.413432 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn9p9"] Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.461359 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4adcec3-f2ed-44e1-b9e0-7cf162f397a4" containerID="783d44ccedcc045eef714e88d6f4e89b2b2fcac9597aca692d51a32e2935ce3e" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.461434 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ch4z" event={"ID":"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4","Type":"ContainerDied","Data":"783d44ccedcc045eef714e88d6f4e89b2b2fcac9597aca692d51a32e2935ce3e"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.464759 4982 generic.go:334] "Generic (PLEG): container finished" podID="a37b5a5b-1991-419f-bdf3-106f3b6579d3" containerID="d61cb5ca176f88ab7c86de1943c575c5e990e86880d8939e5cddfbb201f4573d" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.464846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5vr5" event={"ID":"a37b5a5b-1991-419f-bdf3-106f3b6579d3","Type":"ContainerDied","Data":"d61cb5ca176f88ab7c86de1943c575c5e990e86880d8939e5cddfbb201f4573d"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.466792 4982 generic.go:334] "Generic (PLEG): container finished" podID="534fe1ce-1e6b-4bc4-9091-a6b04d390502" containerID="7ad8f28e24ed85128f03878877a56975b93a9b66605dd0186afda67eef526386" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.466854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwh47" event={"ID":"534fe1ce-1e6b-4bc4-9091-a6b04d390502","Type":"ContainerDied","Data":"7ad8f28e24ed85128f03878877a56975b93a9b66605dd0186afda67eef526386"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.470041 4982 generic.go:334] "Generic (PLEG): container finished" podID="be123c4f-514d-4704-9814-e01766eeb909" containerID="c58db03e1aaa779bd7f07f63103a0ea7d161bde23c58d6ecd60d90d8b91050df" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.470089 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckdqx" event={"ID":"be123c4f-514d-4704-9814-e01766eeb909","Type":"ContainerDied","Data":"c58db03e1aaa779bd7f07f63103a0ea7d161bde23c58d6ecd60d90d8b91050df"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.476506 4982 generic.go:334] "Generic (PLEG): container finished" podID="84bf76b1-be1c-4809-9c81-59fea69e437f" containerID="c7a42f6ae270036f71c1a5a0afa0b2e4a91cf8ba3681f98c6a1ecc6749837681" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.476949 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfshq" event={"ID":"84bf76b1-be1c-4809-9c81-59fea69e437f","Type":"ContainerDied","Data":"c7a42f6ae270036f71c1a5a0afa0b2e4a91cf8ba3681f98c6a1ecc6749837681"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.479594 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e8b4111-8592-42cd-948c-795aff30524f" containerID="ce1e80658c1d72d591ad40cef201a3ed4b909bdc605383486ee92b5692ebabe0" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.479670 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-xmwvp" event={"ID":"3e8b4111-8592-42cd-948c-795aff30524f","Type":"ContainerDied","Data":"ce1e80658c1d72d591ad40cef201a3ed4b909bdc605383486ee92b5692ebabe0"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.481469 4982 generic.go:334] "Generic (PLEG): container finished" podID="948db098-f443-480e-b708-f2593a88e10a" containerID="98db4504a11679f1f9dd07cf06564261d6bc567ced2c3b3d6a9d1e2a445493e5" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.481574 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vg68" event={"ID":"948db098-f443-480e-b708-f2593a88e10a","Type":"ContainerDied","Data":"98db4504a11679f1f9dd07cf06564261d6bc567ced2c3b3d6a9d1e2a445493e5"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.483961 4982 generic.go:334] "Generic (PLEG): container finished" podID="bfdd7e8b-ca6f-4741-9801-1690e746df69" containerID="90b6c68a32e118f5e5ade3dc8d3f0efe27d08ccedb421cf169fe6be944a9024f" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.484034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fcd4" event={"ID":"bfdd7e8b-ca6f-4741-9801-1690e746df69","Type":"ContainerDied","Data":"90b6c68a32e118f5e5ade3dc8d3f0efe27d08ccedb421cf169fe6be944a9024f"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.496799 4982 generic.go:334] "Generic (PLEG): container finished" podID="13e99bc1-f91c-410e-a0bb-64924929a071" containerID="17e793484222b9a1845a0338ab98574530fbe7e25c38fa97833fed83aa6aa641" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.496909 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcbn" event={"ID":"13e99bc1-f91c-410e-a0bb-64924929a071","Type":"ContainerDied","Data":"17e793484222b9a1845a0338ab98574530fbe7e25c38fa97833fed83aa6aa641"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.502645 4982 generic.go:334] "Generic (PLEG): container finished" podID="084d777f-fe2d-498d-b0a7-1cc1e17bf620" containerID="0c54d177ec15dedb9878c9c1c50fca09ee9097898454351afaad2e5523cc8938" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.502758 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g4lb" event={"ID":"084d777f-fe2d-498d-b0a7-1cc1e17bf620","Type":"ContainerDied","Data":"0c54d177ec15dedb9878c9c1c50fca09ee9097898454351afaad2e5523cc8938"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.505800 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a301bdd-53d6-442f-8a9d-0ed62f7ffed0" containerID="a66297cc0d44ec4cc18afe6ef070e8c2d0136a05a32c78b3fae86178505a0eac" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.505872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6zdf" event={"ID":"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0","Type":"ContainerDied","Data":"a66297cc0d44ec4cc18afe6ef070e8c2d0136a05a32c78b3fae86178505a0eac"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.509883 4982 generic.go:334] "Generic (PLEG): container finished" podID="ef5466e8-4630-426e-afa1-f7d0605cfe72" containerID="41a59712d589ee972292683c0965cb8b3ec2c56c6bd8109b66525bab4e1c2a83" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.509947 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n4bp" 
event={"ID":"ef5466e8-4630-426e-afa1-f7d0605cfe72","Type":"ContainerDied","Data":"41a59712d589ee972292683c0965cb8b3ec2c56c6bd8109b66525bab4e1c2a83"} Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.519326 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-246c5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9g4lb_openshift-marketplace(084d777f-fe2d-498d-b0a7-1cc1e17bf620): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.519422 4982 generic.go:334] "Generic (PLEG): container finished" podID="c0aeb299-0e42-4d42-bc18-b410ce52234d" containerID="93b4cf357a4792368ea50cb5703be36f187418d4a342386673e05f74d460f408" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.519476 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d4k28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k6zdf_openshift-marketplace(7a301bdd-53d6-442f-8a9d-0ed62f7ffed0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.519561 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvlq" event={"ID":"c0aeb299-0e42-4d42-bc18-b410ce52234d","Type":"ContainerDied","Data":"93b4cf357a4792368ea50cb5703be36f187418d4a342386673e05f74d460f408"} Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.520463 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-marketplace/redhat-operators-9g4lb" podUID="084d777f-fe2d-498d-b0a7-1cc1e17bf620" Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.521064 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-marketplace/community-operators-k6zdf" podUID="7a301bdd-53d6-442f-8a9d-0ed62f7ffed0" Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.521899 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wn7tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bmvlq_openshift-marketplace(c0aeb299-0e42-4d42-bc18-b410ce52234d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.522978 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-marketplace/community-operators-bmvlq" podUID="c0aeb299-0e42-4d42-bc18-b410ce52234d" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.526202 4982 generic.go:334] "Generic (PLEG): container finished" podID="dd8cca65-3429-44d6-8da9-3aaf3dda8202" containerID="8e96f7500a4e737c705cc5de3ed4b182c77fdae619e2cb8a64fa57c4b32f8f00" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.526470 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgt87" event={"ID":"dd8cca65-3429-44d6-8da9-3aaf3dda8202","Type":"ContainerDied","Data":"8e96f7500a4e737c705cc5de3ed4b182c77fdae619e2cb8a64fa57c4b32f8f00"} Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.528562 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fgt87_openshift-marketplace(dd8cca65-3429-44d6-8da9-3aaf3dda8202): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.529262 4982 generic.go:334] "Generic (PLEG): container finished" podID="db97be89-b4ed-437e-8834-b2ef2de2abc7" containerID="9cf89a387b160370e8ecc8f047b08685e2dbdfd2564f1ef82bb95170b979d09a" exitCode=0 Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.530579 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-marketplace/redhat-operators-fgt87" podUID="dd8cca65-3429-44d6-8da9-3aaf3dda8202" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.530708 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tb5h" event={"ID":"db97be89-b4ed-437e-8834-b2ef2de2abc7","Type":"ContainerDied","Data":"9cf89a387b160370e8ecc8f047b08685e2dbdfd2564f1ef82bb95170b979d09a"} Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.530775 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q" Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.531802 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcrmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9tb5h_openshift-marketplace(db97be89-b4ed-437e-8834-b2ef2de2abc7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:01:07 crc kubenswrapper[4982]: E0123 08:01:07.533087 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-marketplace/redhat-operators-9tb5h" podUID="db97be89-b4ed-437e-8834-b2ef2de2abc7" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.620598 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.826373 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q" podStartSLOduration=20.826356138 podStartE2EDuration="20.826356138s" podCreationTimestamp="2026-01-23 08:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:01:07.821060816 +0000 UTC m=+342.301424192" watchObservedRunningTime="2026-01-23 08:01:07.826356138 +0000 UTC m=+342.306719514" Jan 23 08:01:07 crc kubenswrapper[4982]: I0123 08:01:07.891241 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4q57"] Jan 23 08:01:07 crc kubenswrapper[4982]: W0123 08:01:07.897905 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c8dc713_e17d_4ffa_9b86_985b4d80c024.slice/crio-db972ae3caafa6d11d32d350ab315178709673a66f5475e6b2ca4be372c4ba6c WatchSource:0}: Error finding container db972ae3caafa6d11d32d350ab315178709673a66f5475e6b2ca4be372c4ba6c: Status 404 returned error can't find the container with id db972ae3caafa6d11d32d350ab315178709673a66f5475e6b2ca4be372c4ba6c Jan 23 08:01:08 crc kubenswrapper[4982]: I0123 08:01:08.538544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9p9" event={"ID":"ffbef205-368e-4072-b046-7a15121b9a88","Type":"ContainerStarted","Data":"36b0aa12a72d1cb37f7e81667bd0101b2d456a4f3aeb7079237ecfcf1e9ff075"} Jan 23 08:01:08 crc kubenswrapper[4982]: I0123 08:01:08.540761 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4q57" event={"ID":"0c8dc713-e17d-4ffa-9b86-985b4d80c024","Type":"ContainerStarted","Data":"db972ae3caafa6d11d32d350ab315178709673a66f5475e6b2ca4be372c4ba6c"} Jan 23 08:01:08 crc kubenswrapper[4982]: E0123 08:01:08.543891 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9tb5h" podUID="db97be89-b4ed-437e-8834-b2ef2de2abc7" Jan 23 08:01:08 crc kubenswrapper[4982]: E0123 08:01:08.544154 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fgt87" podUID="dd8cca65-3429-44d6-8da9-3aaf3dda8202" Jan 23 08:01:08 crc kubenswrapper[4982]: E0123 08:01:08.544204 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9g4lb" podUID="084d777f-fe2d-498d-b0a7-1cc1e17bf620" Jan 23 08:01:08 crc kubenswrapper[4982]: E0123 08:01:08.544314 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bmvlq" 
podUID="c0aeb299-0e42-4d42-bc18-b410ce52234d" Jan 23 08:01:08 crc kubenswrapper[4982]: E0123 08:01:08.544381 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-k6zdf" podUID="7a301bdd-53d6-442f-8a9d-0ed62f7ffed0" Jan 23 08:01:08 crc kubenswrapper[4982]: I0123 08:01:08.993975 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v2g84"] Jan 23 08:01:08 crc kubenswrapper[4982]: I0123 08:01:08.996853 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.021900 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2g84"] Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.096904 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419ace1d-7907-417c-9eff-14fb51d42e31-utilities\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.097053 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cllb2\" (UniqueName: \"kubernetes.io/projected/419ace1d-7907-417c-9eff-14fb51d42e31-kube-api-access-cllb2\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.097145 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419ace1d-7907-417c-9eff-14fb51d42e31-catalog-content\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.192757 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78qnc"] Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.195004 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.198776 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419ace1d-7907-417c-9eff-14fb51d42e31-utilities\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.198844 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cllb2\" (UniqueName: \"kubernetes.io/projected/419ace1d-7907-417c-9eff-14fb51d42e31-kube-api-access-cllb2\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.198890 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419ace1d-7907-417c-9eff-14fb51d42e31-catalog-content\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.199443 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/419ace1d-7907-417c-9eff-14fb51d42e31-catalog-content\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.199757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/419ace1d-7907-417c-9eff-14fb51d42e31-utilities\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.209566 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78qnc"] Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.237072 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cllb2\" (UniqueName: \"kubernetes.io/projected/419ace1d-7907-417c-9eff-14fb51d42e31-kube-api-access-cllb2\") pod \"community-operators-v2g84\" (UID: \"419ace1d-7907-417c-9eff-14fb51d42e31\") " pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.300902 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f658dfca-d638-4197-8ba8-bd73f08ee2bc-catalog-content\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.300982 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xvk\" (UniqueName: \"kubernetes.io/projected/f658dfca-d638-4197-8ba8-bd73f08ee2bc-kube-api-access-g9xvk\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.301008 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f658dfca-d638-4197-8ba8-bd73f08ee2bc-utilities\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.402342 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f658dfca-d638-4197-8ba8-bd73f08ee2bc-catalog-content\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.402413 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9xvk\" (UniqueName: \"kubernetes.io/projected/f658dfca-d638-4197-8ba8-bd73f08ee2bc-kube-api-access-g9xvk\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.402438 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f658dfca-d638-4197-8ba8-bd73f08ee2bc-utilities\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.403764 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f658dfca-d638-4197-8ba8-bd73f08ee2bc-utilities\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.403832 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f658dfca-d638-4197-8ba8-bd73f08ee2bc-catalog-content\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.403900 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.470074 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9xvk\" (UniqueName: \"kubernetes.io/projected/f658dfca-d638-4197-8ba8-bd73f08ee2bc-kube-api-access-g9xvk\") pod \"redhat-operators-78qnc\" (UID: \"f658dfca-d638-4197-8ba8-bd73f08ee2bc\") " pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.557557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fcd4" event={"ID":"bfdd7e8b-ca6f-4741-9801-1690e746df69","Type":"ContainerStarted","Data":"f3af3d300fa9fafa4b2491c62267ce896fdc9d53ddd413ecfcb300c2016ed58b"} Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.564099 4982 generic.go:334] "Generic (PLEG): container finished" podID="0c8dc713-e17d-4ffa-9b86-985b4d80c024" containerID="cd1fbad888a7b40135c924ec3fe101af6a90d253f56aee2e3824cca36a458ae1" exitCode=0 Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.565341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4q57" event={"ID":"0c8dc713-e17d-4ffa-9b86-985b4d80c024","Type":"ContainerDied","Data":"cd1fbad888a7b40135c924ec3fe101af6a90d253f56aee2e3824cca36a458ae1"} Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.581127 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xmwvp" event={"ID":"3e8b4111-8592-42cd-948c-795aff30524f","Type":"ContainerStarted","Data":"1793e8b6a0d267ff29da028f84f3fa1b8bf6fef200b5436eeff3d3c4b488887a"} Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.591857 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqp77" event={"ID":"05772d57-547c-4312-9662-be516b1eff34","Type":"ContainerStarted","Data":"04b0c267eab522f7caee6365e7c1ffdc97f6f95c559567f76121d8cce2028296"} Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.598935 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vg68" event={"ID":"948db098-f443-480e-b708-f2593a88e10a","Type":"ContainerStarted","Data":"3f80ba6c3903167c11e680d4c6c35bf46ca0e881da5c122c89a1b4cdff1126d0"} Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.612351 4982 generic.go:334] "Generic (PLEG): container finished" podID="ffbef205-368e-4072-b046-7a15121b9a88" containerID="fdcaf75fd784c85939b2bc8b7d8f0957f7dc6107661b9f75d95ab4dcb69ebd8b" exitCode=0 Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.612401 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9p9" event={"ID":"ffbef205-368e-4072-b046-7a15121b9a88","Type":"ContainerDied","Data":"fdcaf75fd784c85939b2bc8b7d8f0957f7dc6107661b9f75d95ab4dcb69ebd8b"} Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.653024 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xmwvp" podStartSLOduration=15.063051692 podStartE2EDuration="22.652988554s" podCreationTimestamp="2026-01-23 08:00:47 +0000 UTC" firstStartedPulling="2026-01-23 08:01:01.222364896 +0000 UTC m=+335.702728292" lastFinishedPulling="2026-01-23 08:01:08.812301768 +0000 UTC m=+343.292665154" observedRunningTime="2026-01-23 08:01:09.645465993 +0000 UTC m=+344.125829379" watchObservedRunningTime="2026-01-23 08:01:09.652988554 +0000 UTC m=+344.133351940" Jan 23 
Jan 23 08:01:09 crc kubenswrapper[4982]: I0123 08:01:09.707268 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78qnc"
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.099351 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v2g84"]
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.119909 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78qnc"]
Jan 23 08:01:10 crc kubenswrapper[4982]: E0123 08:01:10.314122 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda37b5a5b_1991_419f_bdf3_106f3b6579d3.slice/crio-c575424fec4cddada9d6025553d30dedefe73d83d71ba26b2d69a5143b1363bc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef5466e8_4630_426e_afa1_f7d0605cfe72.slice/crio-f0b239e60d7e1e3d615cd03b77c6b959b2b65fab8c588280b88f167a4afa44f3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod948db098_f443_480e_b708_f2593a88e10a.slice/crio-3f80ba6c3903167c11e680d4c6c35bf46ca0e881da5c122c89a1b4cdff1126d0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod534fe1ce_1e6b_4bc4_9091_a6b04d390502.slice/crio-36b5c3cc1474d92b62523c5a367093f35f4d2e1f6446c3b674a03f0ab8032ab1.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.620706 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckdqx" event={"ID":"be123c4f-514d-4704-9814-e01766eeb909","Type":"ContainerStarted","Data":"c8111e9e86edcdab4aa7d78b109a33189356a542e340efc610846bf13bc65a4e"}
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.623344 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfshq" event={"ID":"84bf76b1-be1c-4809-9c81-59fea69e437f","Type":"ContainerStarted","Data":"0be5a98ce7a66fdade5a799f6616f21d192621d00b9901005917b674a515c584"}
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.625164 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ch4z" event={"ID":"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4","Type":"ContainerStarted","Data":"03fa7766cc8606910fcfc22e5221385b541f2ba6fa2c304f3642d4c8f4c63540"}
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.627675 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n4bp" event={"ID":"ef5466e8-4630-426e-afa1-f7d0605cfe72","Type":"ContainerStarted","Data":"f0b239e60d7e1e3d615cd03b77c6b959b2b65fab8c588280b88f167a4afa44f3"}
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.629104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2g84" event={"ID":"419ace1d-7907-417c-9eff-14fb51d42e31","Type":"ContainerStarted","Data":"f2ed906aae0c80fb429134ce8e854b25a343e9386228e6d7d6c74f0a18afa729"}
Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.631097 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78qnc"
event={"ID":"f658dfca-d638-4197-8ba8-bd73f08ee2bc","Type":"ContainerStarted","Data":"ac7a251dc2ca03c2b47783a6929d53e7a3f278027bda6d8ec8cd9fd33823dde0"} Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.633024 4982 generic.go:334] "Generic (PLEG): container finished" podID="05772d57-547c-4312-9662-be516b1eff34" containerID="04b0c267eab522f7caee6365e7c1ffdc97f6f95c559567f76121d8cce2028296" exitCode=0 Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.633091 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqp77" event={"ID":"05772d57-547c-4312-9662-be516b1eff34","Type":"ContainerDied","Data":"04b0c267eab522f7caee6365e7c1ffdc97f6f95c559567f76121d8cce2028296"} Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.638177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5vr5" event={"ID":"a37b5a5b-1991-419f-bdf3-106f3b6579d3","Type":"ContainerStarted","Data":"c575424fec4cddada9d6025553d30dedefe73d83d71ba26b2d69a5143b1363bc"} Jan 23 08:01:10 crc kubenswrapper[4982]: I0123 08:01:10.641759 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwh47" event={"ID":"534fe1ce-1e6b-4bc4-9091-a6b04d390502","Type":"ContainerStarted","Data":"36b5c3cc1474d92b62523c5a367093f35f4d2e1f6446c3b674a03f0ab8032ab1"} Jan 23 08:01:11 crc kubenswrapper[4982]: I0123 08:01:11.650905 4982 generic.go:334] "Generic (PLEG): container finished" podID="be123c4f-514d-4704-9814-e01766eeb909" containerID="c8111e9e86edcdab4aa7d78b109a33189356a542e340efc610846bf13bc65a4e" exitCode=0 Jan 23 08:01:11 crc kubenswrapper[4982]: I0123 08:01:11.651035 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckdqx" event={"ID":"be123c4f-514d-4704-9814-e01766eeb909","Type":"ContainerDied","Data":"c8111e9e86edcdab4aa7d78b109a33189356a542e340efc610846bf13bc65a4e"} Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.659290 4982 generic.go:334] "Generic (PLEG): container finished" podID="ef5466e8-4630-426e-afa1-f7d0605cfe72" containerID="f0b239e60d7e1e3d615cd03b77c6b959b2b65fab8c588280b88f167a4afa44f3" exitCode=0 Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.659356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n4bp" event={"ID":"ef5466e8-4630-426e-afa1-f7d0605cfe72","Type":"ContainerDied","Data":"f0b239e60d7e1e3d615cd03b77c6b959b2b65fab8c588280b88f167a4afa44f3"} Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.663539 4982 generic.go:334] "Generic (PLEG): container finished" podID="a37b5a5b-1991-419f-bdf3-106f3b6579d3" containerID="c575424fec4cddada9d6025553d30dedefe73d83d71ba26b2d69a5143b1363bc" exitCode=0 Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.663595 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5vr5" event={"ID":"a37b5a5b-1991-419f-bdf3-106f3b6579d3","Type":"ContainerDied","Data":"c575424fec4cddada9d6025553d30dedefe73d83d71ba26b2d69a5143b1363bc"} Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.671980 4982 generic.go:334] "Generic (PLEG): container finished" podID="534fe1ce-1e6b-4bc4-9091-a6b04d390502" containerID="36b5c3cc1474d92b62523c5a367093f35f4d2e1f6446c3b674a03f0ab8032ab1" exitCode=0 Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.672026 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwh47" 
event={"ID":"534fe1ce-1e6b-4bc4-9091-a6b04d390502","Type":"ContainerDied","Data":"36b5c3cc1474d92b62523c5a367093f35f4d2e1f6446c3b674a03f0ab8032ab1"} Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.757334 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fxtrd" Jan 23 08:01:12 crc kubenswrapper[4982]: I0123 08:01:12.811791 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fxtrd" Jan 23 08:01:13 crc kubenswrapper[4982]: I0123 08:01:13.684803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcbn" event={"ID":"13e99bc1-f91c-410e-a0bb-64924929a071","Type":"ContainerStarted","Data":"2c3e9d0a4a36354320eed6633febdfdb9daff6cfa4f5e3b0eda4d1d968e8bedf"} Jan 23 08:01:13 crc kubenswrapper[4982]: I0123 08:01:13.690912 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2g84" event={"ID":"419ace1d-7907-417c-9eff-14fb51d42e31","Type":"ContainerStarted","Data":"87e52f9361cbfeff457478d82406ed6c20980ec95079d7a10d0578882924345a"} Jan 23 08:01:13 crc kubenswrapper[4982]: I0123 08:01:13.694048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78qnc" event={"ID":"f658dfca-d638-4197-8ba8-bd73f08ee2bc","Type":"ContainerStarted","Data":"1ee93d677cd406fb2ae6216dcf2231efa215209c779356ced8e528a7fca5d048"} Jan 23 08:01:14 crc kubenswrapper[4982]: I0123 08:01:14.700993 4982 generic.go:334] "Generic (PLEG): container finished" podID="948db098-f443-480e-b708-f2593a88e10a" containerID="3f80ba6c3903167c11e680d4c6c35bf46ca0e881da5c122c89a1b4cdff1126d0" exitCode=0 Jan 23 08:01:14 crc kubenswrapper[4982]: I0123 08:01:14.701062 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vg68" event={"ID":"948db098-f443-480e-b708-f2593a88e10a","Type":"ContainerDied","Data":"3f80ba6c3903167c11e680d4c6c35bf46ca0e881da5c122c89a1b4cdff1126d0"} Jan 23 08:01:14 crc kubenswrapper[4982]: I0123 08:01:14.703610 4982 generic.go:334] "Generic (PLEG): container finished" podID="bfdd7e8b-ca6f-4741-9801-1690e746df69" containerID="f3af3d300fa9fafa4b2491c62267ce896fdc9d53ddd413ecfcb300c2016ed58b" exitCode=0 Jan 23 08:01:14 crc kubenswrapper[4982]: I0123 08:01:14.703748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fcd4" event={"ID":"bfdd7e8b-ca6f-4741-9801-1690e746df69","Type":"ContainerDied","Data":"f3af3d300fa9fafa4b2491c62267ce896fdc9d53ddd413ecfcb300c2016ed58b"} Jan 23 08:01:15 crc kubenswrapper[4982]: I0123 08:01:15.592485 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vpdt" Jan 23 08:01:15 crc kubenswrapper[4982]: I0123 08:01:15.657183 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vpdt" Jan 23 08:01:15 crc kubenswrapper[4982]: I0123 08:01:15.711147 4982 generic.go:334] "Generic (PLEG): container finished" podID="84bf76b1-be1c-4809-9c81-59fea69e437f" containerID="0be5a98ce7a66fdade5a799f6616f21d192621d00b9901005917b674a515c584" exitCode=0 Jan 23 08:01:15 crc kubenswrapper[4982]: I0123 08:01:15.711293 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfshq" 
event={"ID":"84bf76b1-be1c-4809-9c81-59fea69e437f","Type":"ContainerDied","Data":"0be5a98ce7a66fdade5a799f6616f21d192621d00b9901005917b674a515c584"} Jan 23 08:01:15 crc kubenswrapper[4982]: I0123 08:01:15.713725 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4adcec3-f2ed-44e1-b9e0-7cf162f397a4" containerID="03fa7766cc8606910fcfc22e5221385b541f2ba6fa2c304f3642d4c8f4c63540" exitCode=0 Jan 23 08:01:15 crc kubenswrapper[4982]: I0123 08:01:15.713782 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ch4z" event={"ID":"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4","Type":"ContainerDied","Data":"03fa7766cc8606910fcfc22e5221385b541f2ba6fa2c304f3642d4c8f4c63540"} Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.706419 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xmwvp" Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.709310 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xmwvp" Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.732127 4982 generic.go:334] "Generic (PLEG): container finished" podID="f658dfca-d638-4197-8ba8-bd73f08ee2bc" containerID="1ee93d677cd406fb2ae6216dcf2231efa215209c779356ced8e528a7fca5d048" exitCode=0 Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.732185 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78qnc" event={"ID":"f658dfca-d638-4197-8ba8-bd73f08ee2bc","Type":"ContainerDied","Data":"1ee93d677cd406fb2ae6216dcf2231efa215209c779356ced8e528a7fca5d048"} Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.739482 4982 generic.go:334] "Generic (PLEG): container finished" podID="13e99bc1-f91c-410e-a0bb-64924929a071" containerID="2c3e9d0a4a36354320eed6633febdfdb9daff6cfa4f5e3b0eda4d1d968e8bedf" exitCode=0 Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.739564 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcbn" event={"ID":"13e99bc1-f91c-410e-a0bb-64924929a071","Type":"ContainerDied","Data":"2c3e9d0a4a36354320eed6633febdfdb9daff6cfa4f5e3b0eda4d1d968e8bedf"} Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.742568 4982 generic.go:334] "Generic (PLEG): container finished" podID="419ace1d-7907-417c-9eff-14fb51d42e31" containerID="87e52f9361cbfeff457478d82406ed6c20980ec95079d7a10d0578882924345a" exitCode=0 Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.742620 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2g84" event={"ID":"419ace1d-7907-417c-9eff-14fb51d42e31","Type":"ContainerDied","Data":"87e52f9361cbfeff457478d82406ed6c20980ec95079d7a10d0578882924345a"} Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.800995 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xmwvp" Jan 23 08:01:17 crc kubenswrapper[4982]: I0123 08:01:17.844978 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xmwvp" Jan 23 08:01:28 crc kubenswrapper[4982]: I0123 08:01:28.299417 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q" Jan 23 08:01:28 crc kubenswrapper[4982]: I0123 08:01:28.371956 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-6c2x9"] Jan 23 08:01:41 crc kubenswrapper[4982]: I0123 08:01:41.436939 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:01:41 crc kubenswrapper[4982]: I0123 08:01:41.438810 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:01:47 crc kubenswrapper[4982]: E0123 08:01:47.499532 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Jan 23 08:01:47 crc kubenswrapper[4982]: E0123 08:01:47.500670 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgs9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6ch4z_openshift-marketplace(b4adcec3-f2ed-44e1-b9e0-7cf162f397a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:01:47 crc kubenswrapper[4982]: E0123 08:01:47.501810 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6ch4z" podUID="b4adcec3-f2ed-44e1-b9e0-7cf162f397a4" Jan 23 08:01:48 crc kubenswrapper[4982]: I0123 08:01:48.038555 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqp77" event={"ID":"05772d57-547c-4312-9662-be516b1eff34","Type":"ContainerStarted","Data":"e5d8673ff0bcfa0805b0b69c4d949967879f406f6888ea547ce477a946db2e11"} Jan 23 08:01:48 crc kubenswrapper[4982]: I0123 08:01:48.050649 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4n4bp" event={"ID":"ef5466e8-4630-426e-afa1-f7d0605cfe72","Type":"ContainerStarted","Data":"8a7f7817a4a8146973df04b28715a62caa671ef94b9684e5864a02725a3df078"} Jan 23 08:01:48 crc kubenswrapper[4982]: I0123 08:01:48.064208 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqp77" podStartSLOduration=14.908054369 podStartE2EDuration="1m1.064188466s" podCreationTimestamp="2026-01-23 08:00:47 +0000 UTC" firstStartedPulling="2026-01-23 08:01:01.222790248 +0000 UTC m=+335.703153634" lastFinishedPulling="2026-01-23 08:01:47.378924305 +0000 UTC m=+381.859287731" observedRunningTime="2026-01-23 08:01:48.061153375 +0000 UTC m=+382.541516771" watchObservedRunningTime="2026-01-23 08:01:48.064188466 +0000 UTC m=+382.544551852" Jan 23 08:01:48 crc kubenswrapper[4982]: I0123 08:01:48.087693 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4n4bp" podStartSLOduration=9.132810251 podStartE2EDuration="49.087667364s" podCreationTimestamp="2026-01-23 08:00:59 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.518756354 +0000 UTC m=+341.999119740" lastFinishedPulling="2026-01-23 08:01:47.473613467 +0000 UTC m=+381.953976853" observedRunningTime="2026-01-23 08:01:48.084962042 +0000 UTC m=+382.565325438" watchObservedRunningTime="2026-01-23 08:01:48.087667364 +0000 UTC m=+382.568030750" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.058412 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvlq" event={"ID":"c0aeb299-0e42-4d42-bc18-b410ce52234d","Type":"ContainerStarted","Data":"b6e5340c2e83231e726c887ff5d381d5b8734966b5dc5d2e17697f07a9966572"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.061195 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vg68" event={"ID":"948db098-f443-480e-b708-f2593a88e10a","Type":"ContainerStarted","Data":"01ee6d5e9d42da9285f03afd04956bc885e645ead718cf4dbf35ea5bcf21d3a8"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.063341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4q57" 
event={"ID":"0c8dc713-e17d-4ffa-9b86-985b4d80c024","Type":"ContainerStarted","Data":"f5ff34e0120e2828163c4f31ae78f2b476d3df236e6b3f58c072bd9c2fb2a404"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.065136 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2g84" event={"ID":"419ace1d-7907-417c-9eff-14fb51d42e31","Type":"ContainerStarted","Data":"c0acc23b1ace288102da4ad6ab1ecd3d3238f61c039a793d5a35ce298bcb9cd1"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.067997 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfcbn" event={"ID":"13e99bc1-f91c-410e-a0bb-64924929a071","Type":"ContainerStarted","Data":"e7e627f9d1ab7059e8cfa9305116350046142d292a9d00ed2dce8764d0b708b7"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.070048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgt87" event={"ID":"dd8cca65-3429-44d6-8da9-3aaf3dda8202","Type":"ContainerStarted","Data":"4e69af3f1d1c2abdfb042317d3fdf92db4dcf0929c0bd95003849bfcec34b781"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.072407 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g4lb" event={"ID":"084d777f-fe2d-498d-b0a7-1cc1e17bf620","Type":"ContainerStarted","Data":"8c7c415ca0158f4ad0d5c951cec333baf97a00e148a8eea87475ef539289e624"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.075106 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5vr5" event={"ID":"a37b5a5b-1991-419f-bdf3-106f3b6579d3","Type":"ContainerStarted","Data":"c5cd0bf880dea6b6159f2766defbdef1229ab2757fd4ad490520231381d8d834"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.077478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ckdqx" event={"ID":"be123c4f-514d-4704-9814-e01766eeb909","Type":"ContainerStarted","Data":"64892e56b4fa18edddcecf51e1d4be5e2453a529807927ce52288a54049b5c08"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.079776 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xfshq" event={"ID":"84bf76b1-be1c-4809-9c81-59fea69e437f","Type":"ContainerStarted","Data":"6483c6171d8c3c9f002c3c6a974c31fbe4d12d749afb41c5dcc3544e8dfff3d0"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.083651 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78qnc" event={"ID":"f658dfca-d638-4197-8ba8-bd73f08ee2bc","Type":"ContainerStarted","Data":"a9d1fb7eb5e6bf7ca085b45d8f5b5e7868aadeb6580e7a473c47d42fef339d83"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.086490 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6zdf" event={"ID":"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0","Type":"ContainerStarted","Data":"a124e039571f68f67c62af6382b7388ca509369d6c9ae8be8e2e7355e6ee02a3"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.092578 4982 generic.go:334] "Generic (PLEG): container finished" podID="ffbef205-368e-4072-b046-7a15121b9a88" containerID="d16c0abda43b5e7f9338958b358b09dcc1816d13b9a0b10c1a88c7cca26f0bbd" exitCode=0 Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.092658 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9p9" 
event={"ID":"ffbef205-368e-4072-b046-7a15121b9a88","Type":"ContainerDied","Data":"d16c0abda43b5e7f9338958b358b09dcc1816d13b9a0b10c1a88c7cca26f0bbd"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.098876 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9fcd4" event={"ID":"bfdd7e8b-ca6f-4741-9801-1690e746df69","Type":"ContainerStarted","Data":"15086520c16bbb62e6a308957ef8da920deecf71c5254988400f671bc899751c"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.103871 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xwh47" event={"ID":"534fe1ce-1e6b-4bc4-9091-a6b04d390502","Type":"ContainerStarted","Data":"8265f8d907559be3862852253fe5d23fabb40d8025cc70a758a5f499ae6eef8b"} Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.141595 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xfshq" podStartSLOduration=19.828708767 podStartE2EDuration="1m0.141568901s" podCreationTimestamp="2026-01-23 08:00:49 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.479117154 +0000 UTC m=+341.959480540" lastFinishedPulling="2026-01-23 08:01:47.791977268 +0000 UTC m=+382.272340674" observedRunningTime="2026-01-23 08:01:49.138767326 +0000 UTC m=+383.619131002" watchObservedRunningTime="2026-01-23 08:01:49.141568901 +0000 UTC m=+383.621932287" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.215169 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vfcbn" podStartSLOduration=19.922028381 podStartE2EDuration="1m0.215136178s" podCreationTimestamp="2026-01-23 08:00:49 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.500083694 +0000 UTC m=+341.980447080" lastFinishedPulling="2026-01-23 08:01:47.793191481 +0000 UTC m=+382.273554877" observedRunningTime="2026-01-23 08:01:49.20811722 +0000 UTC m=+383.688480606" watchObservedRunningTime="2026-01-23 08:01:49.215136178 +0000 UTC m=+383.695499564" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.299242 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ckdqx" podStartSLOduration=15.194962348 podStartE2EDuration="55.299211675s" podCreationTimestamp="2026-01-23 08:00:54 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.472208889 +0000 UTC m=+341.952572285" lastFinishedPulling="2026-01-23 08:01:47.576458226 +0000 UTC m=+382.056821612" observedRunningTime="2026-01-23 08:01:49.292363702 +0000 UTC m=+383.772727118" watchObservedRunningTime="2026-01-23 08:01:49.299211675 +0000 UTC m=+383.779575071" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.371554 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5vr5" podStartSLOduration=17.247849233 podStartE2EDuration="57.371526029s" podCreationTimestamp="2026-01-23 08:00:52 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.466322722 +0000 UTC m=+341.946686118" lastFinishedPulling="2026-01-23 08:01:47.589999528 +0000 UTC m=+382.070362914" observedRunningTime="2026-01-23 08:01:49.365441276 +0000 UTC m=+383.845804682" watchObservedRunningTime="2026-01-23 08:01:49.371526029 +0000 UTC m=+383.851889415" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.388729 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4vg68" podStartSLOduration=12.335080665 
podStartE2EDuration="52.388699358s" podCreationTimestamp="2026-01-23 08:00:57 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.483678426 +0000 UTC m=+341.964041822" lastFinishedPulling="2026-01-23 08:01:47.537297119 +0000 UTC m=+382.017660515" observedRunningTime="2026-01-23 08:01:49.387166857 +0000 UTC m=+383.867530243" watchObservedRunningTime="2026-01-23 08:01:49.388699358 +0000 UTC m=+383.869062754" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.415894 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9fcd4" podStartSLOduration=5.370526422 podStartE2EDuration="45.415849964s" podCreationTimestamp="2026-01-23 08:01:04 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.492471221 +0000 UTC m=+341.972834607" lastFinishedPulling="2026-01-23 08:01:47.537794753 +0000 UTC m=+382.018158149" observedRunningTime="2026-01-23 08:01:49.415784942 +0000 UTC m=+383.896148358" watchObservedRunningTime="2026-01-23 08:01:49.415849964 +0000 UTC m=+383.896213350" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.502444 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xwh47" podStartSLOduration=8.451601148 podStartE2EDuration="48.502414107s" podCreationTimestamp="2026-01-23 08:01:01 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.467945235 +0000 UTC m=+341.948308621" lastFinishedPulling="2026-01-23 08:01:47.518758184 +0000 UTC m=+381.999121580" observedRunningTime="2026-01-23 08:01:49.498173064 +0000 UTC m=+383.978536450" watchObservedRunningTime="2026-01-23 08:01:49.502414107 +0000 UTC m=+383.982777483" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.782323 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:49 crc kubenswrapper[4982]: I0123 08:01:49.782414 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.130813 4982 generic.go:334] "Generic (PLEG): container finished" podID="c0aeb299-0e42-4d42-bc18-b410ce52234d" containerID="b6e5340c2e83231e726c887ff5d381d5b8734966b5dc5d2e17697f07a9966572" exitCode=0 Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.130933 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvlq" event={"ID":"c0aeb299-0e42-4d42-bc18-b410ce52234d","Type":"ContainerDied","Data":"b6e5340c2e83231e726c887ff5d381d5b8734966b5dc5d2e17697f07a9966572"} Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.133216 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vfcbn" Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.133334 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vfcbn" Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.139642 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a301bdd-53d6-442f-8a9d-0ed62f7ffed0" containerID="a124e039571f68f67c62af6382b7388ca509369d6c9ae8be8e2e7355e6ee02a3" exitCode=0 Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.139724 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6zdf" 
event={"ID":"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0","Type":"ContainerDied","Data":"a124e039571f68f67c62af6382b7388ca509369d6c9ae8be8e2e7355e6ee02a3"} Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.154385 4982 generic.go:334] "Generic (PLEG): container finished" podID="dd8cca65-3429-44d6-8da9-3aaf3dda8202" containerID="4e69af3f1d1c2abdfb042317d3fdf92db4dcf0929c0bd95003849bfcec34b781" exitCode=0 Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.154525 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgt87" event={"ID":"dd8cca65-3429-44d6-8da9-3aaf3dda8202","Type":"ContainerDied","Data":"4e69af3f1d1c2abdfb042317d3fdf92db4dcf0929c0bd95003849bfcec34b781"} Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.189940 4982 generic.go:334] "Generic (PLEG): container finished" podID="419ace1d-7907-417c-9eff-14fb51d42e31" containerID="c0acc23b1ace288102da4ad6ab1ecd3d3238f61c039a793d5a35ce298bcb9cd1" exitCode=0 Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.190057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2g84" event={"ID":"419ace1d-7907-417c-9eff-14fb51d42e31","Type":"ContainerDied","Data":"c0acc23b1ace288102da4ad6ab1ecd3d3238f61c039a793d5a35ce298bcb9cd1"} Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.209937 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tb5h" event={"ID":"db97be89-b4ed-437e-8834-b2ef2de2abc7","Type":"ContainerStarted","Data":"1e4def667fc18b094745758207ad9ee0b9112e624f70e860eec85f3bcca69a36"} Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.354577 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xfshq" Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.354643 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xfshq" Jan 23 08:01:50 crc kubenswrapper[4982]: I0123 08:01:50.855334 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4n4bp" podUID="ef5466e8-4630-426e-afa1-f7d0605cfe72" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:50 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:50 crc kubenswrapper[4982]: > Jan 23 08:01:51 crc kubenswrapper[4982]: I0123 08:01:51.221326 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vfcbn" podUID="13e99bc1-f91c-410e-a0bb-64924929a071" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:51 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:51 crc kubenswrapper[4982]: > Jan 23 08:01:51 crc kubenswrapper[4982]: I0123 08:01:51.227439 4982 generic.go:334] "Generic (PLEG): container finished" podID="f658dfca-d638-4197-8ba8-bd73f08ee2bc" containerID="a9d1fb7eb5e6bf7ca085b45d8f5b5e7868aadeb6580e7a473c47d42fef339d83" exitCode=0 Jan 23 08:01:51 crc kubenswrapper[4982]: I0123 08:01:51.227569 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78qnc" event={"ID":"f658dfca-d638-4197-8ba8-bd73f08ee2bc","Type":"ContainerDied","Data":"a9d1fb7eb5e6bf7ca085b45d8f5b5e7868aadeb6580e7a473c47d42fef339d83"} Jan 23 08:01:51 crc kubenswrapper[4982]: I0123 08:01:51.238725 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dn9p9" event={"ID":"ffbef205-368e-4072-b046-7a15121b9a88","Type":"ContainerStarted","Data":"a37b3a32811f80e341b381b8c4fae421ace341a9ba2273cb9ee83d258650b0f6"} Jan 23 08:01:51 crc kubenswrapper[4982]: I0123 08:01:51.293983 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dn9p9" podStartSLOduration=4.145920183 podStartE2EDuration="45.293953786s" podCreationTimestamp="2026-01-23 08:01:06 +0000 UTC" firstStartedPulling="2026-01-23 08:01:09.617171386 +0000 UTC m=+344.097534772" lastFinishedPulling="2026-01-23 08:01:50.765204989 +0000 UTC m=+385.245568375" observedRunningTime="2026-01-23 08:01:51.289077675 +0000 UTC m=+385.769441061" watchObservedRunningTime="2026-01-23 08:01:51.293953786 +0000 UTC m=+385.774317172" Jan 23 08:01:51 crc kubenswrapper[4982]: I0123 08:01:51.412499 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xfshq" podUID="84bf76b1-be1c-4809-9c81-59fea69e437f" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:51 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:51 crc kubenswrapper[4982]: > Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.115230 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.118693 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.188693 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xwh47" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.246830 4982 generic.go:334] "Generic (PLEG): container finished" podID="084d777f-fe2d-498d-b0a7-1cc1e17bf620" containerID="8c7c415ca0158f4ad0d5c951cec333baf97a00e148a8eea87475ef539289e624" exitCode=0 Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.247040 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g4lb" event={"ID":"084d777f-fe2d-498d-b0a7-1cc1e17bf620","Type":"ContainerDied","Data":"8c7c415ca0158f4ad0d5c951cec333baf97a00e148a8eea87475ef539289e624"} Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.254129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgt87" event={"ID":"dd8cca65-3429-44d6-8da9-3aaf3dda8202","Type":"ContainerStarted","Data":"761e4ee57d18740d4e8ecf32951b2c6ae3666131b00b298b6a2a5ab8dfeb79b8"} Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.261993 4982 generic.go:334] "Generic (PLEG): container finished" podID="0c8dc713-e17d-4ffa-9b86-985b4d80c024" containerID="f5ff34e0120e2828163c4f31ae78f2b476d3df236e6b3f58c072bd9c2fb2a404" exitCode=0 Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.262060 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4q57" event={"ID":"0c8dc713-e17d-4ffa-9b86-985b4d80c024","Type":"ContainerDied","Data":"f5ff34e0120e2828163c4f31ae78f2b476d3df236e6b3f58c072bd9c2fb2a404"} Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.267560 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v2g84" 
event={"ID":"419ace1d-7907-417c-9eff-14fb51d42e31","Type":"ContainerStarted","Data":"91b02046e9a5768cb96237728c91d75fc283999c20b073dd8f5c0d9e8e8f5d19"} Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.299538 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v2g84" podStartSLOduration=18.490523669 podStartE2EDuration="44.29951517s" podCreationTimestamp="2026-01-23 08:01:08 +0000 UTC" firstStartedPulling="2026-01-23 08:01:25.202549914 +0000 UTC m=+359.682913300" lastFinishedPulling="2026-01-23 08:01:51.011541415 +0000 UTC m=+385.491904801" observedRunningTime="2026-01-23 08:01:52.297987609 +0000 UTC m=+386.778350995" watchObservedRunningTime="2026-01-23 08:01:52.29951517 +0000 UTC m=+386.779878556" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.299975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvlq" event={"ID":"c0aeb299-0e42-4d42-bc18-b410ce52234d","Type":"ContainerStarted","Data":"7fac42c50568e63691fa1b092e21a5c2fca88e40126b2307ed79c63dc19f4c52"} Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.307054 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k6zdf" event={"ID":"7a301bdd-53d6-442f-8a9d-0ed62f7ffed0","Type":"ContainerStarted","Data":"cd3853d62c598b142018617a0eb19b2cb4b32e9e722da17643ef5d69b77eb7ce"} Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.352950 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fgt87" podStartSLOduration=9.801465228 podStartE2EDuration="53.352927128s" podCreationTimestamp="2026-01-23 08:00:59 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.528419642 +0000 UTC m=+342.008783028" lastFinishedPulling="2026-01-23 08:01:51.079881542 +0000 UTC m=+385.560244928" observedRunningTime="2026-01-23 08:01:52.348011847 +0000 UTC m=+386.828375243" watchObservedRunningTime="2026-01-23 08:01:52.352927128 +0000 UTC m=+386.833290514" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.388906 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k6zdf" podStartSLOduration=4.732800212 podStartE2EDuration="48.388873979s" podCreationTimestamp="2026-01-23 08:01:04 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.519431772 +0000 UTC m=+341.999795158" lastFinishedPulling="2026-01-23 08:01:51.175505539 +0000 UTC m=+385.655868925" observedRunningTime="2026-01-23 08:01:52.380094394 +0000 UTC m=+386.860457790" watchObservedRunningTime="2026-01-23 08:01:52.388873979 +0000 UTC m=+386.869237365" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.406396 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bmvlq" podStartSLOduration=12.891607388 podStartE2EDuration="56.406377467s" podCreationTimestamp="2026-01-23 08:00:56 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.521657021 +0000 UTC m=+342.002020407" lastFinishedPulling="2026-01-23 08:01:51.0364271 +0000 UTC m=+385.516790486" observedRunningTime="2026-01-23 08:01:52.405985947 +0000 UTC m=+386.886349343" watchObservedRunningTime="2026-01-23 08:01:52.406377467 +0000 UTC m=+386.886740853" Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.530197 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5vr5" Jan 23 08:01:52 crc kubenswrapper[4982]: 
I0123 08:01:52.530387 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5vr5"
Jan 23 08:01:52 crc kubenswrapper[4982]: I0123 08:01:52.602518 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5vr5"
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.316810 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78qnc" event={"ID":"f658dfca-d638-4197-8ba8-bd73f08ee2bc","Type":"ContainerStarted","Data":"1acb9cee32132aaa8e07bb7b56dbdb00496ea39e38608821b079edb6f469a9ba"}
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.329325 4982 generic.go:334] "Generic (PLEG): container finished" podID="db97be89-b4ed-437e-8834-b2ef2de2abc7" containerID="1e4def667fc18b094745758207ad9ee0b9112e624f70e860eec85f3bcca69a36" exitCode=0
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.329427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tb5h" event={"ID":"db97be89-b4ed-437e-8834-b2ef2de2abc7","Type":"ContainerDied","Data":"1e4def667fc18b094745758207ad9ee0b9112e624f70e860eec85f3bcca69a36"}
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.345291 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78qnc" podStartSLOduration=17.361509398 podStartE2EDuration="44.345269218s" podCreationTimestamp="2026-01-23 08:01:09 +0000 UTC" firstStartedPulling="2026-01-23 08:01:25.203189541 +0000 UTC m=+359.683552927" lastFinishedPulling="2026-01-23 08:01:52.186949361 +0000 UTC m=+386.667312747" observedRunningTime="2026-01-23 08:01:53.340191232 +0000 UTC m=+387.820554638" watchObservedRunningTime="2026-01-23 08:01:53.345269218 +0000 UTC m=+387.825632604"
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.408497 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xwh47"
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.412922 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" containerID="cri-o://a3aa3212599f88db990d91f467df6e5ed91b337af71408a781a831576ab18118" gracePeriod=30
Jan 23 08:01:53 crc kubenswrapper[4982]: I0123 08:01:53.413084 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5vr5"
Jan 23 08:01:54 crc kubenswrapper[4982]: I0123 08:01:54.517999 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k6zdf"
Jan 23 08:01:54 crc kubenswrapper[4982]: I0123 08:01:54.518733 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k6zdf"
Jan 23 08:01:54 crc kubenswrapper[4982]: I0123 08:01:54.583904 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k6zdf"
Jan 23 08:01:54 crc kubenswrapper[4982]: I0123 08:01:54.911178 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9fcd4"
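
The alternating "SyncLoop (probe)" entries here (startup unhealthy, then started; readiness "" then ready) and the repeated timeout: failed to connect service ":50051" within 1s outputs are all produced by the exec probes visible in the container spec dumped in the 08:01:47 kuberuntime_manager error above. A minimal sketch of that probe trio rebuilt with k8s.io/api types, with the numeric fields copied from that dump for the redhat-operators-6ch4z registry-server (the other catalog pods presumably share the same shape, given identical probe output; the rest of the container is elided):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // grpcHealth is the ProbeHandler all three probes in the dumped spec share:
    // an exec of grpc_health_probe against the catalog server's gRPC port.
    func grpcHealth() corev1.ProbeHandler {
        return corev1.ProbeHandler{
            Exec: &corev1.ExecAction{Command: []string{"grpc_health_probe", "-addr=:50051"}},
        }
    }

    func main() {
        c := corev1.Container{
            Name: "registry-server",
            StartupProbe: &corev1.Probe{
                ProbeHandler:     grpcHealth(),
                TimeoutSeconds:   5,
                PeriodSeconds:    10,
                SuccessThreshold: 1,
                FailureThreshold: 10, // tolerates ~100s while opm extracts the catalog and starts serving
            },
            ReadinessProbe: &corev1.Probe{
                ProbeHandler:        grpcHealth(),
                InitialDelaySeconds: 5,
                TimeoutSeconds:      5,
                PeriodSeconds:       10,
                SuccessThreshold:    1,
                FailureThreshold:    3,
            },
            LivenessProbe: &corev1.Probe{
                ProbeHandler:        grpcHealth(),
                InitialDelaySeconds: 10,
                TimeoutSeconds:      5,
                PeriodSeconds:       10,
                SuccessThreshold:    1,
                FailureThreshold:    3,
            },
        }
        fmt.Printf("%s: startup gate allows up to %ds\n", c.Name,
            c.StartupProbe.PeriodSeconds*c.StartupProbe.FailureThreshold)
    }

This matches the ordering of the entries: readiness stays "" while the startup probe is still failing, the startup probe flips to "started" once grpc_health_probe first succeeds, and only then does a real readiness result ("ready") appear.
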
pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:01:54 crc kubenswrapper[4982]: I0123 08:01:54.963339 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:54 crc kubenswrapper[4982]: I0123 08:01:54.963424 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:55 crc kubenswrapper[4982]: I0123 08:01:55.020146 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:55 crc kubenswrapper[4982]: I0123 08:01:55.390588 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ckdqx" Jan 23 08:01:55 crc kubenswrapper[4982]: I0123 08:01:55.956281 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9fcd4" podUID="bfdd7e8b-ca6f-4741-9801-1690e746df69" containerName="registry-server" probeResult="failure" output=< Jan 23 08:01:55 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:01:55 crc kubenswrapper[4982]: > Jan 23 08:01:56 crc kubenswrapper[4982]: I0123 08:01:56.933763 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:56 crc kubenswrapper[4982]: I0123 08:01:56.933828 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:56 crc kubenswrapper[4982]: I0123 08:01:56.991922 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.355460 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.355534 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.411605 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.530043 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.530143 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.573168 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.918306 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqp77" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.918360 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqp77" Jan 23 08:01:57 crc kubenswrapper[4982]: I0123 08:01:57.974117 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqp77" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.404542 
4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.405767 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.455481 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.707567 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.707685 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.749313 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.836142 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:01:59 crc kubenswrapper[4982]: I0123 08:01:59.880482 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4n4bp" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.190777 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vfcbn" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.226317 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.226397 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.250320 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vfcbn" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.281904 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.391457 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xfshq" Jan 23 08:02:00 crc kubenswrapper[4982]: I0123 08:02:00.435922 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xfshq" Jan 23 08:02:02 crc kubenswrapper[4982]: I0123 08:02:02.314302 4982 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-6c2x9 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Jan 23 08:02:02 crc kubenswrapper[4982]: I0123 08:02:02.314699 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Jan 23 08:02:04 crc kubenswrapper[4982]: I0123 
08:02:04.594177 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k6zdf" Jan 23 08:02:04 crc kubenswrapper[4982]: I0123 08:02:04.953975 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:02:05 crc kubenswrapper[4982]: I0123 08:02:05.031437 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9fcd4" Jan 23 08:02:06 crc kubenswrapper[4982]: I0123 08:02:06.990289 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dn9p9" Jan 23 08:02:07 crc kubenswrapper[4982]: I0123 08:02:07.424424 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bmvlq" Jan 23 08:02:07 crc kubenswrapper[4982]: I0123 08:02:07.590886 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4vg68" Jan 23 08:02:07 crc kubenswrapper[4982]: I0123 08:02:07.976972 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqp77" Jan 23 08:02:09 crc kubenswrapper[4982]: I0123 08:02:09.482439 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v2g84" Jan 23 08:02:09 crc kubenswrapper[4982]: I0123 08:02:09.775930 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78qnc" Jan 23 08:02:10 crc kubenswrapper[4982]: I0123 08:02:10.299508 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fgt87" Jan 23 08:02:11 crc kubenswrapper[4982]: I0123 08:02:11.436702 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:02:11 crc kubenswrapper[4982]: I0123 08:02:11.436789 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:02:12 crc kubenswrapper[4982]: I0123 08:02:12.314483 4982 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-6c2x9 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Jan 23 08:02:12 crc kubenswrapper[4982]: I0123 08:02:12.314593 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Jan 23 08:02:22 crc kubenswrapper[4982]: E0123 08:02:22.294443 4982 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.468s" Jan 23 08:02:22 crc kubenswrapper[4982]: I0123 
08:02:22.316138 4982 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-6c2x9 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" start-of-body= Jan 23 08:02:22 crc kubenswrapper[4982]: I0123 08:02:22.316219 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": dial tcp 10.217.0.15:5000: connect: connection refused" Jan 23 08:02:22 crc kubenswrapper[4982]: I0123 08:02:22.320875 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 08:02:23 crc kubenswrapper[4982]: I0123 08:02:23.016409 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-697d97f7c8-6c2x9_75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f/registry/0.log" Jan 23 08:02:23 crc kubenswrapper[4982]: I0123 08:02:23.016510 4982 generic.go:334] "Generic (PLEG): container finished" podID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerID="a3aa3212599f88db990d91f467df6e5ed91b337af71408a781a831576ab18118" exitCode=-1 Jan 23 08:02:23 crc kubenswrapper[4982]: I0123 08:02:23.016865 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" event={"ID":"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f","Type":"ContainerDied","Data":"a3aa3212599f88db990d91f467df6e5ed91b337af71408a781a831576ab18118"} Jan 23 08:02:37 crc kubenswrapper[4982]: I0123 08:02:37.313836 4982 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-6c2x9 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 08:02:37 crc kubenswrapper[4982]: I0123 08:02:37.314491 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.436424 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.437197 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.437276 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.438231 4982 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bc7054b31b8357316441eebf4f999fc630e05a95e4ee23929f60ab52fc5577f"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.438331 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://7bc7054b31b8357316441eebf4f999fc630e05a95e4ee23929f60ab52fc5577f" gracePeriod=600 Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.799999 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957138 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-tls\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957481 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-ca-trust-extracted\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957516 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-trusted-ca\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957547 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-bound-sa-token\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957581 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-certificates\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957778 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957803 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-installation-pull-secrets\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.957837 
4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xww6c\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-kube-api-access-xww6c\") pod \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\" (UID: \"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f\") " Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.959430 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.960185 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.969501 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.969572 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.973043 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-kube-api-access-xww6c" (OuterVolumeSpecName: "kube-api-access-xww6c") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "kube-api-access-xww6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.975124 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.985291 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 08:02:41 crc kubenswrapper[4982]: I0123 08:02:41.989383 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" (UID: "75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059237 4982 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059282 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xww6c\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-kube-api-access-xww6c\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059296 4982 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059310 4982 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059324 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059334 4982 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.059344 4982 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.193032 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.193057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6c2x9" event={"ID":"75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f","Type":"ContainerDied","Data":"fc8fe6bf96b5a8b40dcaddbc9d997ded4e2783fd4ae26017a179bd7b46f9fa78"} Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.193837 4982 scope.go:117] "RemoveContainer" containerID="a3aa3212599f88db990d91f467df6e5ed91b337af71408a781a831576ab18118" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.197073 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="7bc7054b31b8357316441eebf4f999fc630e05a95e4ee23929f60ab52fc5577f" exitCode=0 Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.197129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"7bc7054b31b8357316441eebf4f999fc630e05a95e4ee23929f60ab52fc5577f"} Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.197154 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"ee99b5f60e92b405232080c1205e0aafb07531f737cb696db384041538e4ab08"} Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.212186 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9g4lb" event={"ID":"084d777f-fe2d-498d-b0a7-1cc1e17bf620","Type":"ContainerStarted","Data":"49cbb960b323702230d1481c9a4e47d51afe11cc9a5ffb8e124021ebf4057f48"} Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.216783 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4q57" event={"ID":"0c8dc713-e17d-4ffa-9b86-985b4d80c024","Type":"ContainerStarted","Data":"42505aec634608a7b5ae5fba169cde8529ebda94cf8a5a2130efde76d9d92935"} Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.219030 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tb5h" event={"ID":"db97be89-b4ed-437e-8834-b2ef2de2abc7","Type":"ContainerStarted","Data":"7b6ca379bf3ef2dca61e349436be9791594bd5347aca9c90210229219920bc0c"} Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.253746 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6c2x9"] Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.253812 4982 scope.go:117] "RemoveContainer" containerID="3fae3e8808d0d1b3655f03a285b9bde9a1155c9e20bd52694a0418ec28dad68a" Jan 23 08:02:42 crc kubenswrapper[4982]: I0123 08:02:42.260015 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6c2x9"] Jan 23 08:02:43 crc kubenswrapper[4982]: I0123 08:02:43.228034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ch4z" event={"ID":"b4adcec3-f2ed-44e1-b9e0-7cf162f397a4","Type":"ContainerStarted","Data":"b1486a3cd80d4323e086828e9941f588e85679cc376b7fe8a02a08f2ae4ae748"} Jan 23 08:02:43 crc kubenswrapper[4982]: I0123 08:02:43.279037 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ch4z" 
podStartSLOduration=14.44587709 podStartE2EDuration="1m49.279020693s" podCreationTimestamp="2026-01-23 08:00:54 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.462820038 +0000 UTC m=+341.943183424" lastFinishedPulling="2026-01-23 08:02:42.295963641 +0000 UTC m=+436.776327027" observedRunningTime="2026-01-23 08:02:43.269746755 +0000 UTC m=+437.750110161" watchObservedRunningTime="2026-01-23 08:02:43.279020693 +0000 UTC m=+437.759384079" Jan 23 08:02:43 crc kubenswrapper[4982]: I0123 08:02:43.343965 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9tb5h" podStartSLOduration=17.1284628 podStartE2EDuration="1m51.343944609s" podCreationTimestamp="2026-01-23 08:00:52 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.531676599 +0000 UTC m=+342.012040005" lastFinishedPulling="2026-01-23 08:02:41.747158428 +0000 UTC m=+436.227521814" observedRunningTime="2026-01-23 08:02:43.3428653 +0000 UTC m=+437.823228706" watchObservedRunningTime="2026-01-23 08:02:43.343944609 +0000 UTC m=+437.824307995" Jan 23 08:02:43 crc kubenswrapper[4982]: I0123 08:02:43.369547 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4q57" podStartSLOduration=5.186514174 podStartE2EDuration="1m37.369524923s" podCreationTimestamp="2026-01-23 08:01:06 +0000 UTC" firstStartedPulling="2026-01-23 08:01:09.575932044 +0000 UTC m=+344.056295430" lastFinishedPulling="2026-01-23 08:02:41.758942773 +0000 UTC m=+436.239306179" observedRunningTime="2026-01-23 08:02:43.367587381 +0000 UTC m=+437.847951017" watchObservedRunningTime="2026-01-23 08:02:43.369524923 +0000 UTC m=+437.849888309" Jan 23 08:02:43 crc kubenswrapper[4982]: I0123 08:02:43.392341 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9g4lb" podStartSLOduration=8.154038384 podStartE2EDuration="1m42.392318592s" podCreationTimestamp="2026-01-23 08:01:01 +0000 UTC" firstStartedPulling="2026-01-23 08:01:07.519203066 +0000 UTC m=+341.999566452" lastFinishedPulling="2026-01-23 08:02:41.757483274 +0000 UTC m=+436.237846660" observedRunningTime="2026-01-23 08:02:43.387266087 +0000 UTC m=+437.867629463" watchObservedRunningTime="2026-01-23 08:02:43.392318592 +0000 UTC m=+437.872681978" Jan 23 08:02:43 crc kubenswrapper[4982]: I0123 08:02:43.853958 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" path="/var/lib/kubelet/pods/75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f/volumes" Jan 23 08:02:45 crc kubenswrapper[4982]: I0123 08:02:45.417759 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:02:45 crc kubenswrapper[4982]: I0123 08:02:45.419243 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:02:46 crc kubenswrapper[4982]: I0123 08:02:46.458462 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6ch4z" podUID="b4adcec3-f2ed-44e1-b9e0-7cf162f397a4" containerName="registry-server" probeResult="failure" output=< Jan 23 08:02:46 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:02:46 crc kubenswrapper[4982]: > Jan 23 08:02:47 crc kubenswrapper[4982]: I0123 08:02:47.621861 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:02:47 crc kubenswrapper[4982]: I0123 08:02:47.621938 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:02:48 crc kubenswrapper[4982]: I0123 08:02:48.672815 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4q57" podUID="0c8dc713-e17d-4ffa-9b86-985b4d80c024" containerName="registry-server" probeResult="failure" output=< Jan 23 08:02:48 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:02:48 crc kubenswrapper[4982]: > Jan 23 08:02:52 crc kubenswrapper[4982]: I0123 08:02:52.390642 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:02:52 crc kubenswrapper[4982]: I0123 08:02:52.390974 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:02:52 crc kubenswrapper[4982]: I0123 08:02:52.444355 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:02:52 crc kubenswrapper[4982]: I0123 08:02:52.747249 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:02:52 crc kubenswrapper[4982]: I0123 08:02:52.747397 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:02:52 crc kubenswrapper[4982]: I0123 08:02:52.828961 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:02:53 crc kubenswrapper[4982]: I0123 08:02:53.351384 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9g4lb" Jan 23 08:02:53 crc kubenswrapper[4982]: I0123 08:02:53.351572 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9tb5h" Jan 23 08:02:55 crc kubenswrapper[4982]: I0123 08:02:55.471614 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:02:55 crc kubenswrapper[4982]: I0123 08:02:55.520272 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ch4z" Jan 23 08:02:57 crc kubenswrapper[4982]: I0123 08:02:57.673460 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:02:57 crc kubenswrapper[4982]: I0123 08:02:57.715513 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4q57" Jan 23 08:04:26 crc kubenswrapper[4982]: I0123 08:04:26.429157 4982 scope.go:117] "RemoveContainer" containerID="17cf6e260143ad6250597edf70c79e4076420af3f3888ffa058cb8055616167b" Jan 23 08:05:11 crc kubenswrapper[4982]: I0123 08:05:11.436589 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:05:11 crc kubenswrapper[4982]: I0123 08:05:11.437618 4982 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:05:26 crc kubenswrapper[4982]: I0123 08:05:26.496455 4982 scope.go:117] "RemoveContainer" containerID="30093051b502f056151b139efaf31f0e671b1839cf18d222c2d70b25f450a61d" Jan 23 08:05:41 crc kubenswrapper[4982]: I0123 08:05:41.436214 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:05:41 crc kubenswrapper[4982]: I0123 08:05:41.437035 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:06:11 crc kubenswrapper[4982]: I0123 08:06:11.436472 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:06:11 crc kubenswrapper[4982]: I0123 08:06:11.437361 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:06:11 crc kubenswrapper[4982]: I0123 08:06:11.437462 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:06:11 crc kubenswrapper[4982]: I0123 08:06:11.438489 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee99b5f60e92b405232080c1205e0aafb07531f737cb696db384041538e4ab08"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:06:11 crc kubenswrapper[4982]: I0123 08:06:11.438603 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://ee99b5f60e92b405232080c1205e0aafb07531f737cb696db384041538e4ab08" gracePeriod=600 Jan 23 08:06:12 crc kubenswrapper[4982]: I0123 08:06:12.392520 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="ee99b5f60e92b405232080c1205e0aafb07531f737cb696db384041538e4ab08" exitCode=0 Jan 23 08:06:12 crc kubenswrapper[4982]: I0123 08:06:12.392608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"ee99b5f60e92b405232080c1205e0aafb07531f737cb696db384041538e4ab08"} Jan 23 08:06:12 crc kubenswrapper[4982]: I0123 08:06:12.393112 4982 scope.go:117] "RemoveContainer" containerID="7bc7054b31b8357316441eebf4f999fc630e05a95e4ee23929f60ab52fc5577f" Jan 23 08:06:14 crc kubenswrapper[4982]: I0123 08:06:14.413884 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"35ca4511cd3232156669a737407761628df1a5436b3552a0057dc7fc63d6e9b8"} Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.902178 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwb28"] Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903359 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-controller" containerID="cri-o://9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903450 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="nbdb" containerID="cri-o://87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903527 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-node" containerID="cri-o://c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903569 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-acl-logging" containerID="cri-o://b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903515 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903784 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="northd" containerID="cri-o://4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.903909 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="sbdb" containerID="cri-o://5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" gracePeriod=30 Jan 23 08:07:16 crc kubenswrapper[4982]: I0123 08:07:16.940389 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" containerID="cri-o://9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" gracePeriod=30 Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.305896 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/3.log" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.310040 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovn-acl-logging/0.log" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.310716 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovn-controller/0.log" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.311391 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.390675 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cjx8r"] Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.392034 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.392307 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.392824 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-acl-logging" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.392903 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-acl-logging" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.392999 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393065 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393136 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="northd" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393188 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="northd" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393259 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393315 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393378 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393435 4982 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393499 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="nbdb" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393554 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="nbdb" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393611 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kubecfg-setup" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393689 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kubecfg-setup" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393755 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393845 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.393913 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.393966 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.394033 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-node" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394086 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-node" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.394149 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="sbdb" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394208 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="sbdb" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394683 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-node" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394759 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394822 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394883 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c9ecd7-a4c2-4c6f-8cda-9c6e3b3b1b9f" containerName="registry" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.394940 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="northd" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 
08:07:17.395005 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395075 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395126 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395181 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovn-acl-logging" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395235 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="sbdb" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395287 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="nbdb" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.395462 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395514 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.395570 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395619 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395817 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.395874 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerName="ovnkube-controller" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.399243 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492614 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-etc-openvswitch\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492717 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khrrv\" (UniqueName: \"kubernetes.io/projected/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-kube-api-access-khrrv\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492786 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-netns\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492802 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492825 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-var-lib-openvswitch\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492834 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492869 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-node-log\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492940 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-log-socket\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492987 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-systemd\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-node-log" (OuterVolumeSpecName: "node-log") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492998 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.492967 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-log-socket" (OuterVolumeSpecName: "log-socket") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493005 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493034 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovn-node-metrics-cert\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493173 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-slash\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493203 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-env-overrides\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493220 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-config\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493251 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-netd\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493293 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-bin\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493311 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-openvswitch\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493337 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-systemd-units\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493358 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-script-lib\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-ovn\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: 
\"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493447 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-kubelet\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493477 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-ovn-kubernetes\") pod \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\" (UID: \"6e15f1e6-f3ea-4b15-89c0-e867fd52646e\") " Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493826 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493897 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493919 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.493839 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494159 4982 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494178 4982 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494189 4982 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494201 4982 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494212 4982 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494225 4982 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494238 4982 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494250 4982 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494261 4982 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494264 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494295 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-slash" (OuterVolumeSpecName: "host-slash") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494394 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.494340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.495033 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.500044 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.500727 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-kube-api-access-khrrv" (OuterVolumeSpecName: "kube-api-access-khrrv") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "kube-api-access-khrrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.515127 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6e15f1e6-f3ea-4b15-89c0-e867fd52646e" (UID: "6e15f1e6-f3ea-4b15-89c0-e867fd52646e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596017 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-slash\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-systemd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-env-overrides\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596132 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovnkube-script-lib\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596215 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-cni-netd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596237 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-log-socket\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-run-netns\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596490 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-systemd-units\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-var-lib-openvswitch\") pod 
\"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.596981 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597037 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-ovn\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597096 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovnkube-config\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597159 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-node-log\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597193 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-cni-bin\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597258 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-etc-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597286 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-kubelet\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597321 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597459 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-run-ovn-kubernetes\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovn-node-metrics-cert\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597528 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhmd\" (UniqueName: \"kubernetes.io/projected/91d3d6c2-6dd5-48b6-b378-77b61e30463a-kube-api-access-grhmd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597670 4982 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597691 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597704 4982 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597714 4982 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597725 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597734 4982 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597745 4982 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597754 4982 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597766 4982 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597776 4982 
reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.597788 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khrrv\" (UniqueName: \"kubernetes.io/projected/6e15f1e6-f3ea-4b15-89c0-e867fd52646e-kube-api-access-khrrv\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.699364 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-cni-netd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.699924 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-log-socket\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.699578 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-cni-netd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700067 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-run-netns\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.699989 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-run-netns\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700173 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-systemd-units\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700234 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-var-lib-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700266 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-systemd-units\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc 
kubenswrapper[4982]: I0123 08:07:17.700310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700288 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-log-socket\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700349 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700505 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-var-lib-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700746 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-ovn\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700851 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovnkube-config\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-ovn\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700903 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-node-log\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.700962 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-node-log\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701000 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-cni-bin\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701036 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-etc-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-kubelet\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-cni-bin\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701116 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-etc-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701125 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-kubelet\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701129 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-run-ovn-kubernetes\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-run-ovn-kubernetes\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-openvswitch\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701289 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovn-node-metrics-cert\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701323 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhmd\" (UniqueName: \"kubernetes.io/projected/91d3d6c2-6dd5-48b6-b378-77b61e30463a-kube-api-access-grhmd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701361 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-slash\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701394 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-systemd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701444 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-host-slash\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701476 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91d3d6c2-6dd5-48b6-b378-77b61e30463a-run-systemd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701481 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-env-overrides\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701511 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovnkube-script-lib\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.701777 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovnkube-config\") pod \"ovnkube-node-cjx8r\" (UID: 
\"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.702234 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-env-overrides\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.702472 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovnkube-script-lib\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.710069 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91d3d6c2-6dd5-48b6-b378-77b61e30463a-ovn-node-metrics-cert\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.720782 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhmd\" (UniqueName: \"kubernetes.io/projected/91d3d6c2-6dd5-48b6-b378-77b61e30463a-kube-api-access-grhmd\") pod \"ovnkube-node-cjx8r\" (UID: \"91d3d6c2-6dd5-48b6-b378-77b61e30463a\") " pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.731040 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.979540 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" event={"ID":"91d3d6c2-6dd5-48b6-b378-77b61e30463a","Type":"ContainerStarted","Data":"f186c2f26dc51a9ec065ab35d20ab2007529eb272d6a4a4b870134b1924e382a"} Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.982891 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/2.log" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.983353 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/1.log" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.983407 4982 generic.go:334] "Generic (PLEG): container finished" podID="a3e50013-d63d-4e43-924e-74237cd4a690" containerID="9f750412f1b719e6e62b43d562b2fabbbf5cf8ac3de13de29735a7bf9167655c" exitCode=2 Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.983470 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerDied","Data":"9f750412f1b719e6e62b43d562b2fabbbf5cf8ac3de13de29735a7bf9167655c"} Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.983519 4982 scope.go:117] "RemoveContainer" containerID="1b11f82de5d7fa217dd88dae35592ab742b503ac1f23a35c3f01f1ea33b5a05b" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.984411 4982 scope.go:117] "RemoveContainer" containerID="9f750412f1b719e6e62b43d562b2fabbbf5cf8ac3de13de29735a7bf9167655c" Jan 23 08:07:17 crc kubenswrapper[4982]: E0123 08:07:17.984845 4982 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gfvtq_openshift-multus(a3e50013-d63d-4e43-924e-74237cd4a690)\"" pod="openshift-multus/multus-gfvtq" podUID="a3e50013-d63d-4e43-924e-74237cd4a690" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.989923 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovnkube-controller/3.log" Jan 23 08:07:17 crc kubenswrapper[4982]: I0123 08:07:17.999109 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovn-acl-logging/0.log" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000099 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwb28_6e15f1e6-f3ea-4b15-89c0-e867fd52646e/ovn-controller/0.log" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000666 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" exitCode=0 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000704 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" exitCode=0 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000736 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" exitCode=0 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000746 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" exitCode=0 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000759 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" exitCode=0 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000769 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" exitCode=0 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000779 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" exitCode=143 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000791 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e15f1e6-f3ea-4b15-89c0-e867fd52646e" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" exitCode=143 Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000782 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000842 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000866 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000883 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000894 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000917 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000930 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000947 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000953 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000959 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000965 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000972 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000978 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000984 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000990 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.000995 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001012 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001020 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001027 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001035 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001042 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001048 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001055 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001063 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001069 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001075 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001083 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001093 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001101 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001107 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001112 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001118 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001125 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001132 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001137 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001143 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001149 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001157 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwb28" event={"ID":"6e15f1e6-f3ea-4b15-89c0-e867fd52646e","Type":"ContainerDied","Data":"4dad7a794679558f392d5d0206e1b781f1460bf8d79c6f4259f83cc0ddd3d1fb"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001166 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001175 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 
08:07:18.001180 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001186 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001193 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001199 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001205 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001211 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001217 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.001222 4982 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.024910 4982 scope.go:117] "RemoveContainer" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.057127 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.061503 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwb28"] Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.066163 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwb28"] Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.091246 4982 scope.go:117] "RemoveContainer" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.113262 4982 scope.go:117] "RemoveContainer" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.129044 4982 scope.go:117] "RemoveContainer" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.145813 4982 scope.go:117] "RemoveContainer" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.165374 4982 scope.go:117] "RemoveContainer" containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" Jan 23 08:07:18 
crc kubenswrapper[4982]: I0123 08:07:18.182796 4982 scope.go:117] "RemoveContainer" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.206711 4982 scope.go:117] "RemoveContainer" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.230365 4982 scope.go:117] "RemoveContainer" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.265106 4982 scope.go:117] "RemoveContainer" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.267358 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": container with ID starting with 9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181 not found: ID does not exist" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.267407 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} err="failed to get container status \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": rpc error: code = NotFound desc = could not find container \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": container with ID starting with 9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.267436 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.268008 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": container with ID starting with aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9 not found: ID does not exist" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.270793 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} err="failed to get container status \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": rpc error: code = NotFound desc = could not find container \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": container with ID starting with aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.270829 4982 scope.go:117] "RemoveContainer" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.271436 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": container with ID starting with 5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831 not found: ID does not 
exist" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.271478 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} err="failed to get container status \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": rpc error: code = NotFound desc = could not find container \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": container with ID starting with 5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.271516 4982 scope.go:117] "RemoveContainer" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.272049 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": container with ID starting with 87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e not found: ID does not exist" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.272079 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} err="failed to get container status \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": rpc error: code = NotFound desc = could not find container \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": container with ID starting with 87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.272103 4982 scope.go:117] "RemoveContainer" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.272586 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": container with ID starting with 4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789 not found: ID does not exist" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.272619 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} err="failed to get container status \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": rpc error: code = NotFound desc = could not find container \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": container with ID starting with 4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.272665 4982 scope.go:117] "RemoveContainer" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.273044 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": container with ID starting with a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5 not found: ID does not exist" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.273083 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} err="failed to get container status \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": rpc error: code = NotFound desc = could not find container \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": container with ID starting with a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.273123 4982 scope.go:117] "RemoveContainer" containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.274138 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": container with ID starting with c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3 not found: ID does not exist" containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.274164 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} err="failed to get container status \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": rpc error: code = NotFound desc = could not find container \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": container with ID starting with c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.274179 4982 scope.go:117] "RemoveContainer" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.275744 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": container with ID starting with b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593 not found: ID does not exist" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.275771 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} err="failed to get container status \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": rpc error: code = NotFound desc = could not find container \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": container with ID starting with b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.275785 4982 scope.go:117] "RemoveContainer" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" Jan 23 08:07:18 crc 
kubenswrapper[4982]: E0123 08:07:18.276051 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": container with ID starting with 9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66 not found: ID does not exist" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.276067 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} err="failed to get container status \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": rpc error: code = NotFound desc = could not find container \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": container with ID starting with 9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.276080 4982 scope.go:117] "RemoveContainer" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" Jan 23 08:07:18 crc kubenswrapper[4982]: E0123 08:07:18.276333 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": container with ID starting with f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a not found: ID does not exist" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.276354 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} err="failed to get container status \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": rpc error: code = NotFound desc = could not find container \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": container with ID starting with f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.276368 4982 scope.go:117] "RemoveContainer" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.276683 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} err="failed to get container status \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": rpc error: code = NotFound desc = could not find container \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": container with ID starting with 9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.276703 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.277395 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} err="failed to get container status 
\"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": rpc error: code = NotFound desc = could not find container \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": container with ID starting with aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.277416 4982 scope.go:117] "RemoveContainer" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.277743 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} err="failed to get container status \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": rpc error: code = NotFound desc = could not find container \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": container with ID starting with 5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.277770 4982 scope.go:117] "RemoveContainer" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.278294 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} err="failed to get container status \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": rpc error: code = NotFound desc = could not find container \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": container with ID starting with 87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.278310 4982 scope.go:117] "RemoveContainer" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.280052 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} err="failed to get container status \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": rpc error: code = NotFound desc = could not find container \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": container with ID starting with 4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.280077 4982 scope.go:117] "RemoveContainer" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.280882 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} err="failed to get container status \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": rpc error: code = NotFound desc = could not find container \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": container with ID starting with a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.280901 4982 scope.go:117] "RemoveContainer" 
containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.281347 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} err="failed to get container status \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": rpc error: code = NotFound desc = could not find container \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": container with ID starting with c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.281364 4982 scope.go:117] "RemoveContainer" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.281584 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} err="failed to get container status \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": rpc error: code = NotFound desc = could not find container \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": container with ID starting with b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.281598 4982 scope.go:117] "RemoveContainer" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.281791 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} err="failed to get container status \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": rpc error: code = NotFound desc = could not find container \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": container with ID starting with 9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.281807 4982 scope.go:117] "RemoveContainer" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.282905 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} err="failed to get container status \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": rpc error: code = NotFound desc = could not find container \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": container with ID starting with f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.282923 4982 scope.go:117] "RemoveContainer" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.283351 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} err="failed to get container status \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": rpc error: code = NotFound desc = could not find 
container \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": container with ID starting with 9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.283369 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.283619 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} err="failed to get container status \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": rpc error: code = NotFound desc = could not find container \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": container with ID starting with aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.283710 4982 scope.go:117] "RemoveContainer" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.283990 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} err="failed to get container status \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": rpc error: code = NotFound desc = could not find container \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": container with ID starting with 5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284011 4982 scope.go:117] "RemoveContainer" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284286 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} err="failed to get container status \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": rpc error: code = NotFound desc = could not find container \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": container with ID starting with 87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284301 4982 scope.go:117] "RemoveContainer" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284595 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} err="failed to get container status \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": rpc error: code = NotFound desc = could not find container \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": container with ID starting with 4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284611 4982 scope.go:117] "RemoveContainer" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284850 4982 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} err="failed to get container status \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": rpc error: code = NotFound desc = could not find container \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": container with ID starting with a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.284878 4982 scope.go:117] "RemoveContainer" containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285144 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} err="failed to get container status \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": rpc error: code = NotFound desc = could not find container \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": container with ID starting with c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285162 4982 scope.go:117] "RemoveContainer" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285359 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} err="failed to get container status \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": rpc error: code = NotFound desc = could not find container \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": container with ID starting with b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285374 4982 scope.go:117] "RemoveContainer" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285581 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} err="failed to get container status \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": rpc error: code = NotFound desc = could not find container \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": container with ID starting with 9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285596 4982 scope.go:117] "RemoveContainer" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285852 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} err="failed to get container status \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": rpc error: code = NotFound desc = could not find container \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": container with ID starting with 
f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.285909 4982 scope.go:117] "RemoveContainer" containerID="9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.286266 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181"} err="failed to get container status \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": rpc error: code = NotFound desc = could not find container \"9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181\": container with ID starting with 9d410f3f1565447625a9a112fc53d214afc79dbe7b1c802426506a5e563a4181 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.286284 4982 scope.go:117] "RemoveContainer" containerID="aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.286632 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9"} err="failed to get container status \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": rpc error: code = NotFound desc = could not find container \"aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9\": container with ID starting with aa40993bfba743cd7fe4d467ae2e0620325469d381ae6f0909281c3b89f441e9 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.286652 4982 scope.go:117] "RemoveContainer" containerID="5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.286867 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831"} err="failed to get container status \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": rpc error: code = NotFound desc = could not find container \"5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831\": container with ID starting with 5811c17bcf6a2c3f2c009fd94414be397e5427c53d5caee2cbd466f04335d831 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.286883 4982 scope.go:117] "RemoveContainer" containerID="87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.287251 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e"} err="failed to get container status \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": rpc error: code = NotFound desc = could not find container \"87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e\": container with ID starting with 87e1259064df6a5c98e339e93b1a75aaf5f89c7fcf4ba5e7081906bac3b74e6e not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.287270 4982 scope.go:117] "RemoveContainer" containerID="4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.287549 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789"} err="failed to get container status \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": rpc error: code = NotFound desc = could not find container \"4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789\": container with ID starting with 4224238f5be73b313ebca31c5beb9adb8e477c3a3c10b7b9343cbe19890c9789 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.287568 4982 scope.go:117] "RemoveContainer" containerID="a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.287860 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5"} err="failed to get container status \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": rpc error: code = NotFound desc = could not find container \"a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5\": container with ID starting with a1380dae9b0a6fe9014d502dc70121270f1dc4acb167d37b369a6ff463d77bf5 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.287876 4982 scope.go:117] "RemoveContainer" containerID="c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.288157 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3"} err="failed to get container status \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": rpc error: code = NotFound desc = could not find container \"c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3\": container with ID starting with c06c5ae8d4fcf63524f24065e36613ce0195ee339c5ba6e8d5c6cfc0d0a2c3d3 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.288174 4982 scope.go:117] "RemoveContainer" containerID="b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.288437 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593"} err="failed to get container status \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": rpc error: code = NotFound desc = could not find container \"b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593\": container with ID starting with b22acdea108d854ce48d55cc6edbf670534e2b4abeedfa3f11aace43fd9d4593 not found: ID does not exist" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.288453 4982 scope.go:117] "RemoveContainer" containerID="9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.288720 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66"} err="failed to get container status \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": rpc error: code = NotFound desc = could not find container \"9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66\": container with ID starting with 9a110474d832e805ee4c1190d1343768e340d2864bee4c5ae4623f9f7623eb66 not found: ID does not exist" Jan 
23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.288743 4982 scope.go:117] "RemoveContainer" containerID="f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a" Jan 23 08:07:18 crc kubenswrapper[4982]: I0123 08:07:18.289037 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a"} err="failed to get container status \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": rpc error: code = NotFound desc = could not find container \"f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a\": container with ID starting with f8a49dd6b3421609be2c6a36941200be9093692bbfba453feb22468f4b22665a not found: ID does not exist"
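
The burst above is the kubelet's container-cleanup path colliding with containers CRI-O has already removed: every RemoveContainer attempt comes back as gRPC NotFound, the kubelet logs "DeleteContainer returned error" at info level, and reconciliation proceeds, because the goal state (container gone) already holds. A minimal sketch of that idempotent-delete pattern, assuming a gRPC-style error from the runtime; the helper names and the hard-coded failure are illustrative, not kubelet's actual code:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call. Here it
// always reports NotFound, mirroring the journal entries above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// cleanupContainer treats NotFound as success: if the runtime no
// longer knows the ID, the deletion goal is already satisfied.
func cleanupContainer(id string) error {
	err := removeContainer(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil // already gone counts as done
	}
	return err
}

func main() {
	if err := cleanupContainer("a1380dae9b0a"); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("container absent or removed")
}
```

Tolerating NotFound this way is what keeps the retries above noisy but harmless.
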
pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" event={"ID":"91d3d6c2-6dd5-48b6-b378-77b61e30463a","Type":"ContainerStarted","Data":"4e9c16001ee7ceaa92574c67f01f7a10f6a0d7df102dfcc8806499009d771fae"} Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.113015 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" event={"ID":"91d3d6c2-6dd5-48b6-b378-77b61e30463a","Type":"ContainerStarted","Data":"6e105f6fd567bcebf7a818bcd4dccf8d0926fe53c668f52a3f94597d270d869d"} Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.803921 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2d8ch"] Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.806114 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.811200 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.811877 4982 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-wlphp" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.812799 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.813120 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.970939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-crc-storage\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.971157 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-node-mnt\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:26 crc kubenswrapper[4982]: I0123 08:07:26.971517 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2bdx\" (UniqueName: \"kubernetes.io/projected/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-kube-api-access-g2bdx\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.073364 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-node-mnt\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.073968 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2bdx\" (UniqueName: \"kubernetes.io/projected/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-kube-api-access-g2bdx\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.073832 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-node-mnt\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.074079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-crc-storage\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.074943 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-crc-storage\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.102066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2bdx\" (UniqueName: \"kubernetes.io/projected/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-kube-api-access-g2bdx\") pod \"crc-storage-crc-2d8ch\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.129160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" event={"ID":"91d3d6c2-6dd5-48b6-b378-77b61e30463a","Type":"ContainerStarted","Data":"dca9840b929f7549ef98a95a4f61b1dbd5e4f8532bd05230068492e93490ef01"} Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.129515 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.155383 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.168777 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" podStartSLOduration=10.168758175 podStartE2EDuration="10.168758175s" podCreationTimestamp="2026-01-23 08:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:07:27.166371381 +0000 UTC m=+721.646734777" watchObservedRunningTime="2026-01-23 08:07:27.168758175 +0000 UTC m=+721.649121561" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.176922 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:27 crc kubenswrapper[4982]: E0123 08:07:27.223927 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(b6f7b2485ff3e531a291cc9e8f72597c9e11b60958b62c4a98ef52b3be376cc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
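
The RunPodSandbox failure above is CRI-O refusing to wire the pod's network: /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, because the network provider's own pods (multus, OVN) were still coming up at this point. A stdlib-only sketch of the same directory check an admin might run from the node; the accepted extensions (.conf, .conflist, .json) are an assumption based on common CNI conventions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file, using the extensions CNI-aware runtimes
// conventionally accept.
func hasCNIConfig(dir string) (bool, []string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, nil, err
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	return len(found) > 0, found, nil
}

func main() {
	ok, files, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI dir:", err)
		return
	}
	fmt.Printf("config present: %v, files: %v\n", ok, files)
}
```

Once ovnkube-node is ready and multus restarts below, the provider drops its config into this directory and sandbox creation starts succeeding.
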
Jan 23 08:07:27 crc kubenswrapper[4982]: E0123 08:07:27.224316 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(b6f7b2485ff3e531a291cc9e8f72597c9e11b60958b62c4a98ef52b3be376cc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: E0123 08:07:27.224412 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(b6f7b2485ff3e531a291cc9e8f72597c9e11b60958b62c4a98ef52b3be376cc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:27 crc kubenswrapper[4982]: E0123 08:07:27.224531 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2d8ch_crc-storage(a4fcaf92-86c7-45b6-83e5-e8f2fb77e687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2d8ch_crc-storage(a4fcaf92-86c7-45b6-83e5-e8f2fb77e687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(b6f7b2485ff3e531a291cc9e8f72597c9e11b60958b62c4a98ef52b3be376cc2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2d8ch" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.731466 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.770454 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2d8ch"] Jan 23 08:07:27 crc kubenswrapper[4982]: I0123 08:07:27.779404 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:28 crc kubenswrapper[4982]: I0123 08:07:28.134236 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:28 crc kubenswrapper[4982]: I0123 08:07:28.134889 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:28 crc kubenswrapper[4982]: I0123 08:07:28.135302 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:28 crc kubenswrapper[4982]: E0123 08:07:28.172791 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(f4198d57f534c258c6e8c9a7ee27550583d3821c82ee9a3f2bbbdadd57d26420): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
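
The CreatePodSandbox error for crc-storage-crc-2d8ch now repeats (08:07:27, 08:07:28, and again below at 08:07:38). When triaging a capture like this one, counting the failures per pod quickly separates a single flapping pod from a node-wide network outage. A sketch that reads journal text from stdin, assuming one record per line as journalctl emits it; the regexp keys off the kuberuntime_sandbox message and the pod="…" field exactly as formatted in these entries:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Counts "Failed to create sandbox for pod" records per pod="ns/name"
// field, reading a journal capture from stdin (one record per line).
func main() {
	failRe := regexp.MustCompile(`"Failed to create sandbox for pod" err=.*?pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal records can be long
	for sc.Scan() {
		if m := failRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}
	for pod, n := range counts {
		fmt.Printf("%3d sandbox failures  %s\n", n, pod)
	}
}
```

Fed this capture it would report three sandbox failures for crc-storage/crc-storage-crc-2d8ch (08:07:27, 08:07:28, 08:07:38); the stricter `" err=` anchor keeps it from double-counting the pod_workers records that quote the same message inside their error strings.
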
Jan 23 08:07:28 crc kubenswrapper[4982]: E0123 08:07:28.172876 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(f4198d57f534c258c6e8c9a7ee27550583d3821c82ee9a3f2bbbdadd57d26420): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:28 crc kubenswrapper[4982]: E0123 08:07:28.172919 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(f4198d57f534c258c6e8c9a7ee27550583d3821c82ee9a3f2bbbdadd57d26420): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:28 crc kubenswrapper[4982]: E0123 08:07:28.172993 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2d8ch_crc-storage(a4fcaf92-86c7-45b6-83e5-e8f2fb77e687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2d8ch_crc-storage(a4fcaf92-86c7-45b6-83e5-e8f2fb77e687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(f4198d57f534c258c6e8c9a7ee27550583d3821c82ee9a3f2bbbdadd57d26420): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2d8ch" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" Jan 23 08:07:29 crc kubenswrapper[4982]: I0123 08:07:29.828113 4982 scope.go:117] "RemoveContainer" containerID="9f750412f1b719e6e62b43d562b2fabbbf5cf8ac3de13de29735a7bf9167655c" Jan 23 08:07:29 crc kubenswrapper[4982]: E0123 08:07:29.829024 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gfvtq_openshift-multus(a3e50013-d63d-4e43-924e-74237cd4a690)\"" pod="openshift-multus/multus-gfvtq" podUID="a3e50013-d63d-4e43-924e-74237cd4a690" Jan 23 08:07:38 crc kubenswrapper[4982]: I0123 08:07:38.826329 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:38 crc kubenswrapper[4982]: I0123 08:07:38.827809 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:38 crc kubenswrapper[4982]: E0123 08:07:38.875375 4982 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(621df722f9617533ab620efbc55e248734489b6cf9be4156e501f11ec1305ad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 08:07:38 crc kubenswrapper[4982]: E0123 08:07:38.875450 4982 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(621df722f9617533ab620efbc55e248734489b6cf9be4156e501f11ec1305ad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:38 crc kubenswrapper[4982]: E0123 08:07:38.875481 4982 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(621df722f9617533ab620efbc55e248734489b6cf9be4156e501f11ec1305ad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:38 crc kubenswrapper[4982]: E0123 08:07:38.875542 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2d8ch_crc-storage(a4fcaf92-86c7-45b6-83e5-e8f2fb77e687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2d8ch_crc-storage(a4fcaf92-86c7-45b6-83e5-e8f2fb77e687)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2d8ch_crc-storage_a4fcaf92-86c7-45b6-83e5-e8f2fb77e687_0(621df722f9617533ab620efbc55e248734489b6cf9be4156e501f11ec1305ad4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2d8ch" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" Jan 23 08:07:41 crc kubenswrapper[4982]: I0123 08:07:41.828047 4982 scope.go:117] "RemoveContainer" containerID="9f750412f1b719e6e62b43d562b2fabbbf5cf8ac3de13de29735a7bf9167655c" Jan 23 08:07:42 crc kubenswrapper[4982]: I0123 08:07:42.271494 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gfvtq_a3e50013-d63d-4e43-924e-74237cd4a690/kube-multus/2.log" Jan 23 08:07:42 crc kubenswrapper[4982]: I0123 08:07:42.271550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gfvtq" event={"ID":"a3e50013-d63d-4e43-924e-74237cd4a690","Type":"ContainerStarted","Data":"b8b3785c0e274788530ba5b8ac60fa2fea1d875d001d18ef2ed24c539a6bcd9b"} Jan 23 08:07:47 crc kubenswrapper[4982]: I0123 08:07:47.759860 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cjx8r" Jan 23 08:07:50 crc kubenswrapper[4982]: I0123 08:07:50.827152 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch"
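
kube-multus spent the interval above in CrashLoopBackOff ("back-off 20s restarting failed container=kube-multus" at 08:07:29) before the retry at 08:07:41 and the successful start at 08:07:42. The kubelet doubles a container's restart delay on each consecutive failure up to a ceiling; the sketch below assumes the upstream defaults (10s initial, 5m cap), under which a second consecutive failure yields exactly the 20s seen here:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns a kubelet-style restart delay for the n-th
// consecutive failure (n starts at 1): initial * 2^(n-1), clamped to
// max. The 10s/5m values below mirror upstream kubelet defaults and
// are assumptions for this sketch, not read from this node's config.
func crashLoopDelay(n int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> back-off %v\n",
			n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// failure 2 -> back-off 20s, matching the
	// "back-off 20s restarting failed container=kube-multus" entry.
}
```

The kubelet also resets this counter after a sustained healthy run, which is why a later isolated restart would start back at the initial delay.
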
Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:51 crc kubenswrapper[4982]: I0123 08:07:51.094063 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2d8ch"] Jan 23 08:07:51 crc kubenswrapper[4982]: I0123 08:07:51.103691 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:07:51 crc kubenswrapper[4982]: I0123 08:07:51.331333 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2d8ch" event={"ID":"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687","Type":"ContainerStarted","Data":"618311083cdc43ddb8ee61c6333ba0bbd68deebc719331127b2485e01f87fbac"} Jan 23 08:07:56 crc kubenswrapper[4982]: I0123 08:07:56.377292 4982 generic.go:334] "Generic (PLEG): container finished" podID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" containerID="5805d626c5595e0cd459655977bc2a4b6c2fbc057a2a92f4254ad0cbe43c1aa2" exitCode=0 Jan 23 08:07:56 crc kubenswrapper[4982]: I0123 08:07:56.377508 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2d8ch" event={"ID":"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687","Type":"ContainerDied","Data":"5805d626c5595e0cd459655977bc2a4b6c2fbc057a2a92f4254ad0cbe43c1aa2"} Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.645053 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.679169 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-crc-storage\") pod \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.679271 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-node-mnt\") pod \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.679320 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2bdx\" (UniqueName: \"kubernetes.io/projected/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-kube-api-access-g2bdx\") pod \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\" (UID: \"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687\") " Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.679471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" (UID: "a4fcaf92-86c7-45b6-83e5-e8f2fb77e687"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.680289 4982 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.683801 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-kube-api-access-g2bdx" (OuterVolumeSpecName: "kube-api-access-g2bdx") pod "a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" (UID: "a4fcaf92-86c7-45b6-83e5-e8f2fb77e687"). 
InnerVolumeSpecName "kube-api-access-g2bdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.696638 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" (UID: "a4fcaf92-86c7-45b6-83e5-e8f2fb77e687"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.782094 4982 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:57 crc kubenswrapper[4982]: I0123 08:07:57.782126 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2bdx\" (UniqueName: \"kubernetes.io/projected/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687-kube-api-access-g2bdx\") on node \"crc\" DevicePath \"\"" Jan 23 08:07:58 crc kubenswrapper[4982]: I0123 08:07:58.393948 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2d8ch" event={"ID":"a4fcaf92-86c7-45b6-83e5-e8f2fb77e687","Type":"ContainerDied","Data":"618311083cdc43ddb8ee61c6333ba0bbd68deebc719331127b2485e01f87fbac"} Jan 23 08:07:58 crc kubenswrapper[4982]: I0123 08:07:58.394289 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618311083cdc43ddb8ee61c6333ba0bbd68deebc719331127b2485e01f87fbac" Jan 23 08:07:58 crc kubenswrapper[4982]: I0123 08:07:58.394344 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2d8ch" Jan 23 08:07:58 crc kubenswrapper[4982]: I0123 08:07:58.889742 4982 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.156505 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk"] Jan 23 08:08:05 crc kubenswrapper[4982]: E0123 08:08:05.157384 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" containerName="storage" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.157402 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" containerName="storage" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.157573 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687" containerName="storage" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.158772 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.161577 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.168219 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk"] Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.203459 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.203533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.203718 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngk7t\" (UniqueName: \"kubernetes.io/projected/16004dc5-ebcc-4b6e-af9a-770063f0474e-kube-api-access-ngk7t\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.305101 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngk7t\" (UniqueName: \"kubernetes.io/projected/16004dc5-ebcc-4b6e-af9a-770063f0474e-kube-api-access-ngk7t\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.305195 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.305258 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.306277 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.306606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.323498 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngk7t\" (UniqueName: \"kubernetes.io/projected/16004dc5-ebcc-4b6e-af9a-770063f0474e-kube-api-access-ngk7t\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.477069 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:05 crc kubenswrapper[4982]: I0123 08:08:05.902331 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk"] Jan 23 08:08:06 crc kubenswrapper[4982]: I0123 08:08:06.475609 4982 generic.go:334] "Generic (PLEG): container finished" podID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerID="9454d24931d555cf44bc1686bb9758ee1eb9f241d9beb58872434ad953ae2673" exitCode=0 Jan 23 08:08:06 crc kubenswrapper[4982]: I0123 08:08:06.475669 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" event={"ID":"16004dc5-ebcc-4b6e-af9a-770063f0474e","Type":"ContainerDied","Data":"9454d24931d555cf44bc1686bb9758ee1eb9f241d9beb58872434ad953ae2673"} Jan 23 08:08:06 crc kubenswrapper[4982]: I0123 08:08:06.475730 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" event={"ID":"16004dc5-ebcc-4b6e-af9a-770063f0474e","Type":"ContainerStarted","Data":"3566168ec8c69b43740cc8ba00b774fecc30a2e8fa27e0c1ea448b59e815b29d"} Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.445109 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-grrc5"] Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.446742 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.450831 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grrc5"] Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.541157 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-utilities\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.541269 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-catalog-content\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.541338 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwkv7\" (UniqueName: \"kubernetes.io/projected/cbf14c37-9ab3-4ed4-9527-42498fe7c358-kube-api-access-pwkv7\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.644162 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-utilities\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.644212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-catalog-content\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.644244 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwkv7\" (UniqueName: \"kubernetes.io/projected/cbf14c37-9ab3-4ed4-9527-42498fe7c358-kube-api-access-pwkv7\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.644989 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-utilities\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.645155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-catalog-content\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.666108 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pwkv7\" (UniqueName: \"kubernetes.io/projected/cbf14c37-9ab3-4ed4-9527-42498fe7c358-kube-api-access-pwkv7\") pod \"redhat-operators-grrc5\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:07 crc kubenswrapper[4982]: I0123 08:08:07.781068 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:08 crc kubenswrapper[4982]: I0123 08:08:08.020701 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-grrc5"] Jan 23 08:08:08 crc kubenswrapper[4982]: I0123 08:08:08.496972 4982 generic.go:334] "Generic (PLEG): container finished" podID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerID="fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb" exitCode=0 Jan 23 08:08:08 crc kubenswrapper[4982]: I0123 08:08:08.497529 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerDied","Data":"fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb"} Jan 23 08:08:08 crc kubenswrapper[4982]: I0123 08:08:08.497577 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerStarted","Data":"e99e5273deb114b549f91a5f317a37a95b9e2c8201be48c619381856910fb876"} Jan 23 08:08:09 crc kubenswrapper[4982]: I0123 08:08:09.505306 4982 generic.go:334] "Generic (PLEG): container finished" podID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerID="6e62dd391056bd308c047f864ef7863d3fa2b085d22a57264b5a84452de980ab" exitCode=0 Jan 23 08:08:09 crc kubenswrapper[4982]: I0123 08:08:09.505348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" event={"ID":"16004dc5-ebcc-4b6e-af9a-770063f0474e","Type":"ContainerDied","Data":"6e62dd391056bd308c047f864ef7863d3fa2b085d22a57264b5a84452de980ab"} Jan 23 08:08:11 crc kubenswrapper[4982]: I0123 08:08:11.523090 4982 generic.go:334] "Generic (PLEG): container finished" podID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerID="0f5bddb819d2546e7f038f8a7bb24ea1da72d7403f983726a219fc964cbac78f" exitCode=0 Jan 23 08:08:11 crc kubenswrapper[4982]: I0123 08:08:11.523466 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" event={"ID":"16004dc5-ebcc-4b6e-af9a-770063f0474e","Type":"ContainerDied","Data":"0f5bddb819d2546e7f038f8a7bb24ea1da72d7403f983726a219fc964cbac78f"} Jan 23 08:08:11 crc kubenswrapper[4982]: I0123 08:08:11.526727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerStarted","Data":"643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6"} Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.817871 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.822494 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-util\") pod \"16004dc5-ebcc-4b6e-af9a-770063f0474e\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.822560 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-bundle\") pod \"16004dc5-ebcc-4b6e-af9a-770063f0474e\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.822668 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngk7t\" (UniqueName: \"kubernetes.io/projected/16004dc5-ebcc-4b6e-af9a-770063f0474e-kube-api-access-ngk7t\") pod \"16004dc5-ebcc-4b6e-af9a-770063f0474e\" (UID: \"16004dc5-ebcc-4b6e-af9a-770063f0474e\") " Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.823122 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-bundle" (OuterVolumeSpecName: "bundle") pod "16004dc5-ebcc-4b6e-af9a-770063f0474e" (UID: "16004dc5-ebcc-4b6e-af9a-770063f0474e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.828377 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16004dc5-ebcc-4b6e-af9a-770063f0474e-kube-api-access-ngk7t" (OuterVolumeSpecName: "kube-api-access-ngk7t") pod "16004dc5-ebcc-4b6e-af9a-770063f0474e" (UID: "16004dc5-ebcc-4b6e-af9a-770063f0474e"). InnerVolumeSpecName "kube-api-access-ngk7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.833905 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-util" (OuterVolumeSpecName: "util") pod "16004dc5-ebcc-4b6e-af9a-770063f0474e" (UID: "16004dc5-ebcc-4b6e-af9a-770063f0474e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.923685 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngk7t\" (UniqueName: \"kubernetes.io/projected/16004dc5-ebcc-4b6e-af9a-770063f0474e-kube-api-access-ngk7t\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.923718 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:12 crc kubenswrapper[4982]: I0123 08:08:12.923820 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16004dc5-ebcc-4b6e-af9a-770063f0474e-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:13 crc kubenswrapper[4982]: I0123 08:08:13.545732 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" Jan 23 08:08:13 crc kubenswrapper[4982]: I0123 08:08:13.545750 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk" event={"ID":"16004dc5-ebcc-4b6e-af9a-770063f0474e","Type":"ContainerDied","Data":"3566168ec8c69b43740cc8ba00b774fecc30a2e8fa27e0c1ea448b59e815b29d"} Jan 23 08:08:13 crc kubenswrapper[4982]: I0123 08:08:13.546112 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3566168ec8c69b43740cc8ba00b774fecc30a2e8fa27e0c1ea448b59e815b29d" Jan 23 08:08:13 crc kubenswrapper[4982]: I0123 08:08:13.548277 4982 generic.go:334] "Generic (PLEG): container finished" podID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerID="643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6" exitCode=0 Jan 23 08:08:13 crc kubenswrapper[4982]: I0123 08:08:13.548347 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerDied","Data":"643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6"} Jan 23 08:08:14 crc kubenswrapper[4982]: I0123 08:08:14.562593 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerStarted","Data":"63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c"} Jan 23 08:08:14 crc kubenswrapper[4982]: I0123 08:08:14.580913 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-grrc5" podStartSLOduration=1.9598056499999998 podStartE2EDuration="7.580894726s" podCreationTimestamp="2026-01-23 08:08:07 +0000 UTC" firstStartedPulling="2026-01-23 08:08:08.499900562 +0000 UTC m=+762.980263948" lastFinishedPulling="2026-01-23 08:08:14.120989618 +0000 UTC m=+768.601353024" observedRunningTime="2026-01-23 08:08:14.577341061 +0000 UTC m=+769.057704437" watchObservedRunningTime="2026-01-23 08:08:14.580894726 +0000 UTC m=+769.061258122" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.419343 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rxmvn"] Jan 23 08:08:15 crc kubenswrapper[4982]: E0123 08:08:15.420046 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="extract" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.420060 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="extract" Jan 23 08:08:15 crc kubenswrapper[4982]: E0123 08:08:15.420085 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="util" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.420091 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="util" Jan 23 08:08:15 crc kubenswrapper[4982]: E0123 08:08:15.420101 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="pull" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.420108 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="pull" Jan 23 08:08:15 crc
kubenswrapper[4982]: I0123 08:08:15.420221 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16004dc5-ebcc-4b6e-af9a-770063f0474e" containerName="extract" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.420655 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.426232 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qvgbr" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.426965 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.427000 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.434165 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rxmvn"] Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.461311 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59m6r\" (UniqueName: \"kubernetes.io/projected/f58a63b0-6c3e-4307-b336-8dacff6ec7ee-kube-api-access-59m6r\") pod \"nmstate-operator-646758c888-rxmvn\" (UID: \"f58a63b0-6c3e-4307-b336-8dacff6ec7ee\") " pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.561992 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59m6r\" (UniqueName: \"kubernetes.io/projected/f58a63b0-6c3e-4307-b336-8dacff6ec7ee-kube-api-access-59m6r\") pod \"nmstate-operator-646758c888-rxmvn\" (UID: \"f58a63b0-6c3e-4307-b336-8dacff6ec7ee\") " pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.582848 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59m6r\" (UniqueName: \"kubernetes.io/projected/f58a63b0-6c3e-4307-b336-8dacff6ec7ee-kube-api-access-59m6r\") pod \"nmstate-operator-646758c888-rxmvn\" (UID: \"f58a63b0-6c3e-4307-b336-8dacff6ec7ee\") " pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" Jan 23 08:08:15 crc kubenswrapper[4982]: I0123 08:08:15.737467 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" Jan 23 08:08:16 crc kubenswrapper[4982]: I0123 08:08:16.035160 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rxmvn"] Jan 23 08:08:16 crc kubenswrapper[4982]: I0123 08:08:16.575144 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" event={"ID":"f58a63b0-6c3e-4307-b336-8dacff6ec7ee","Type":"ContainerStarted","Data":"a65cbd0d4a413c5d2a4e621c370b6510493079ab99b4e26a2a1638c39f3e51ef"} Jan 23 08:08:17 crc kubenswrapper[4982]: I0123 08:08:17.782539 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:17 crc kubenswrapper[4982]: I0123 08:08:17.782611 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:18 crc kubenswrapper[4982]: I0123 08:08:18.829015 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-grrc5" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="registry-server" probeResult="failure" output=< Jan 23 08:08:18 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:08:18 crc kubenswrapper[4982]: > Jan 23 08:08:19 crc kubenswrapper[4982]: I0123 08:08:19.609286 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" event={"ID":"f58a63b0-6c3e-4307-b336-8dacff6ec7ee","Type":"ContainerStarted","Data":"82772137189f4243ec5f75e02af91a82558b7c4440c822f954aa86c6e793a57f"} Jan 23 08:08:19 crc kubenswrapper[4982]: I0123 08:08:19.631997 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-rxmvn" podStartSLOduration=1.5332732949999999 podStartE2EDuration="4.631974236s" podCreationTimestamp="2026-01-23 08:08:15 +0000 UTC" firstStartedPulling="2026-01-23 08:08:16.051357759 +0000 UTC m=+770.531721145" lastFinishedPulling="2026-01-23 08:08:19.15005866 +0000 UTC m=+773.630422086" observedRunningTime="2026-01-23 08:08:19.630015114 +0000 UTC m=+774.110378520" watchObservedRunningTime="2026-01-23 08:08:19.631974236 +0000 UTC m=+774.112337652" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.403339 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-89w8z"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.405896 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.410877 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.412424 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-858v2" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.412535 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.417990 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.421080 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-89w8z"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.431814 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.445351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhzg\" (UniqueName: \"kubernetes.io/projected/1eeec26f-f857-4cfd-bc3a-54ff1e371b06-kube-api-access-ddhzg\") pod \"nmstate-metrics-54757c584b-89w8z\" (UID: \"1eeec26f-f857-4cfd-bc3a-54ff1e371b06\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.447314 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-vfwzn"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.448228 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.547447 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7rd4\" (UniqueName: \"kubernetes.io/projected/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-kube-api-access-k7rd4\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.547785 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhzg\" (UniqueName: \"kubernetes.io/projected/1eeec26f-f857-4cfd-bc3a-54ff1e371b06-kube-api-access-ddhzg\") pod \"nmstate-metrics-54757c584b-89w8z\" (UID: \"1eeec26f-f857-4cfd-bc3a-54ff1e371b06\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.547910 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.565016 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.568810 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.572335 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.572661 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.574405 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-42hfc" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.583315 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.583342 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhzg\" (UniqueName: \"kubernetes.io/projected/1eeec26f-f857-4cfd-bc3a-54ff1e371b06-kube-api-access-ddhzg\") pod \"nmstate-metrics-54757c584b-89w8z\" (UID: \"1eeec26f-f857-4cfd-bc3a-54ff1e371b06\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.650093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zfs\" (UniqueName: \"kubernetes.io/projected/3fb4fc7d-a353-4a2f-a374-b960e7405e77-kube-api-access-z8zfs\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.650898 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.650959 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-ovs-socket\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: E0123 08:08:25.651106 4982 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 08:08:25 crc kubenswrapper[4982]: E0123 08:08:25.651193 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-tls-key-pair podName:f8e1934e-1ae7-4e66-8b7d-c49c55350e43 nodeName:}" failed. No retries permitted until 2026-01-23 08:08:26.151164127 +0000 UTC m=+780.631527513 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-lp5mm" (UID: "f8e1934e-1ae7-4e66-8b7d-c49c55350e43") : secret "openshift-nmstate-webhook" not found Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.651389 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-nmstate-lock\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.651445 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-dbus-socket\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.651507 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7rd4\" (UniqueName: \"kubernetes.io/projected/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-kube-api-access-k7rd4\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.688498 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7rd4\" (UniqueName: \"kubernetes.io/projected/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-kube-api-access-k7rd4\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.830263 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831357 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831427 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831474 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-nmstate-lock\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831508 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-dbus-socket\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831552 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8zfs\" (UniqueName: \"kubernetes.io/projected/3fb4fc7d-a353-4a2f-a374-b960e7405e77-kube-api-access-z8zfs\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26s6q\" (UniqueName: \"kubernetes.io/projected/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-kube-api-access-26s6q\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831657 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-nmstate-lock\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.831840 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-ovs-socket\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.832207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-ovs-socket\") pod 
\"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.832266 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3fb4fc7d-a353-4a2f-a374-b960e7405e77-dbus-socket\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.865365 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8zfs\" (UniqueName: \"kubernetes.io/projected/3fb4fc7d-a353-4a2f-a374-b960e7405e77-kube-api-access-z8zfs\") pod \"nmstate-handler-vfwzn\" (UID: \"3fb4fc7d-a353-4a2f-a374-b960e7405e77\") " pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.905980 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b869c7dd5-2qhfp"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.911338 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.930236 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b869c7dd5-2qhfp"] Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937142 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7k7f\" (UniqueName: \"kubernetes.io/projected/c25233f7-1a8e-430c-98a3-3d63bfedd61e-kube-api-access-j7k7f\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937264 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-oauth-config\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-trusted-ca-bundle\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937359 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26s6q\" (UniqueName: \"kubernetes.io/projected/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-kube-api-access-26s6q\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 
08:08:25.937384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-service-ca\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-serving-cert\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937434 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-config\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937460 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-oauth-serving-cert\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.937526 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: E0123 08:08:25.937756 4982 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 23 08:08:25 crc kubenswrapper[4982]: E0123 08:08:25.937835 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-plugin-serving-cert podName:c9f0bd14-9c31-4898-ae22-e0890f02ffb7 nodeName:}" failed. No retries permitted until 2026-01-23 08:08:26.437812539 +0000 UTC m=+780.918175935 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-zxgxq" (UID: "c9f0bd14-9c31-4898-ae22-e0890f02ffb7") : secret "plugin-serving-cert" not found Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.939603 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:25 crc kubenswrapper[4982]: I0123 08:08:25.965180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26s6q\" (UniqueName: \"kubernetes.io/projected/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-kube-api-access-26s6q\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.039100 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7k7f\" (UniqueName: \"kubernetes.io/projected/c25233f7-1a8e-430c-98a3-3d63bfedd61e-kube-api-access-j7k7f\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.039680 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-oauth-config\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.039774 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-trusted-ca-bundle\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.041581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-trusted-ca-bundle\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.041704 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-service-ca\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.042397 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-service-ca\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.042450 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-serving-cert\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.042985 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-config\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.043039 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-oauth-serving-cert\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.044615 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-config\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.044721 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25233f7-1a8e-430c-98a3-3d63bfedd61e-oauth-serving-cert\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.046323 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-serving-cert\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.046934 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25233f7-1a8e-430c-98a3-3d63bfedd61e-console-oauth-config\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.058584 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7k7f\" (UniqueName: \"kubernetes.io/projected/c25233f7-1a8e-430c-98a3-3d63bfedd61e-kube-api-access-j7k7f\") pod \"console-7b869c7dd5-2qhfp\" (UID: \"c25233f7-1a8e-430c-98a3-3d63bfedd61e\") " pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.085443 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.193032 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-89w8z"] Jan 23 08:08:26 crc kubenswrapper[4982]: W0123 08:08:26.201775 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eeec26f_f857_4cfd_bc3a_54ff1e371b06.slice/crio-93f9110d341c0c1a9c15a1a08d36212e4d0e60c2c2b2351819b0c8cc1c569768 WatchSource:0}: Error finding container 93f9110d341c0c1a9c15a1a08d36212e4d0e60c2c2b2351819b0c8cc1c569768: Status 404 returned error can't find the container with id 93f9110d341c0c1a9c15a1a08d36212e4d0e60c2c2b2351819b0c8cc1c569768 Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.245881 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.251707 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f8e1934e-1ae7-4e66-8b7d-c49c55350e43-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-lp5mm\" (UID: \"f8e1934e-1ae7-4e66-8b7d-c49c55350e43\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.258762 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.369415 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.449328 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.454074 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9f0bd14-9c31-4898-ae22-e0890f02ffb7-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-zxgxq\" (UID: \"c9f0bd14-9c31-4898-ae22-e0890f02ffb7\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.507697 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b869c7dd5-2qhfp"] Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.513213 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-42hfc" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.522000 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.683783 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" event={"ID":"1eeec26f-f857-4cfd-bc3a-54ff1e371b06","Type":"ContainerStarted","Data":"93f9110d341c0c1a9c15a1a08d36212e4d0e60c2c2b2351819b0c8cc1c569768"} Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.686220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vfwzn" event={"ID":"3fb4fc7d-a353-4a2f-a374-b960e7405e77","Type":"ContainerStarted","Data":"e0dd7bcbb16f2dd5ed71ae8b6e1fcbca7b9b75b201c7da6f559d4f54e06d5114"} Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.687290 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b869c7dd5-2qhfp" event={"ID":"c25233f7-1a8e-430c-98a3-3d63bfedd61e","Type":"ContainerStarted","Data":"b0ff219f19db8fb7db6177c0d7db5a1941462accd3cae8c9405d43042c857da4"} Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.724419 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm"] Jan 23 08:08:26 crc kubenswrapper[4982]: I0123 08:08:26.820095 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq"] Jan 23 08:08:27 crc kubenswrapper[4982]: I0123 08:08:27.844385 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:27 crc kubenswrapper[4982]: I0123 08:08:27.901968 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:28 crc kubenswrapper[4982]: I0123 08:08:28.082987 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grrc5"] Jan 23 08:08:28 crc kubenswrapper[4982]: I0123 08:08:28.705677 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" event={"ID":"f8e1934e-1ae7-4e66-8b7d-c49c55350e43","Type":"ContainerStarted","Data":"57a41eedcd3d9b3dbf56f0834c11254691f563bd0b4b313b6d5da61ff95980fe"} Jan 23 08:08:28 crc kubenswrapper[4982]: I0123 08:08:28.707654 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" event={"ID":"c9f0bd14-9c31-4898-ae22-e0890f02ffb7","Type":"ContainerStarted","Data":"ebb6408c7d361f3cdc7621f90e09c93a23aae945639b73c0db24c5f3acf46600"} Jan 23 08:08:28 crc kubenswrapper[4982]: I0123 08:08:28.710354 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b869c7dd5-2qhfp" event={"ID":"c25233f7-1a8e-430c-98a3-3d63bfedd61e","Type":"ContainerStarted","Data":"94736052a57a12ab639c2c0057f92c6ef714f0b16869351e0a04e7c849a3d228"} Jan 23 08:08:28 crc kubenswrapper[4982]: I0123 08:08:28.732519 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b869c7dd5-2qhfp" podStartSLOduration=3.732500822 podStartE2EDuration="3.732500822s" podCreationTimestamp="2026-01-23 08:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:08:28.728222888 +0000 UTC m=+783.208586294" watchObservedRunningTime="2026-01-23 08:08:28.732500822 +0000 UTC m=+783.212864208" Jan 23 08:08:29 crc kubenswrapper[4982]: I0123 
08:08:29.718283 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-grrc5" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="registry-server" containerID="cri-o://63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c" gracePeriod=2 Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.198381 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.323493 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwkv7\" (UniqueName: \"kubernetes.io/projected/cbf14c37-9ab3-4ed4-9527-42498fe7c358-kube-api-access-pwkv7\") pod \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.323574 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-catalog-content\") pod \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.323654 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-utilities\") pod \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\" (UID: \"cbf14c37-9ab3-4ed4-9527-42498fe7c358\") " Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.324744 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-utilities" (OuterVolumeSpecName: "utilities") pod "cbf14c37-9ab3-4ed4-9527-42498fe7c358" (UID: "cbf14c37-9ab3-4ed4-9527-42498fe7c358"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.340847 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf14c37-9ab3-4ed4-9527-42498fe7c358-kube-api-access-pwkv7" (OuterVolumeSpecName: "kube-api-access-pwkv7") pod "cbf14c37-9ab3-4ed4-9527-42498fe7c358" (UID: "cbf14c37-9ab3-4ed4-9527-42498fe7c358"). InnerVolumeSpecName "kube-api-access-pwkv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.424855 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.424889 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwkv7\" (UniqueName: \"kubernetes.io/projected/cbf14c37-9ab3-4ed4-9527-42498fe7c358-kube-api-access-pwkv7\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.441332 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbf14c37-9ab3-4ed4-9527-42498fe7c358" (UID: "cbf14c37-9ab3-4ed4-9527-42498fe7c358"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.526387 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf14c37-9ab3-4ed4-9527-42498fe7c358-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.729361 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-vfwzn" event={"ID":"3fb4fc7d-a353-4a2f-a374-b960e7405e77","Type":"ContainerStarted","Data":"0a399ada09c14a1a98c1679537cd228ae692d201f6ed9e88fe9ce9c5b8286ed4"} Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.729804 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.743106 4982 generic.go:334] "Generic (PLEG): container finished" podID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerID="63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c" exitCode=0 Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.743178 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerDied","Data":"63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c"} Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.743208 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-grrc5" event={"ID":"cbf14c37-9ab3-4ed4-9527-42498fe7c358","Type":"ContainerDied","Data":"e99e5273deb114b549f91a5f317a37a95b9e2c8201be48c619381856910fb876"} Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.743225 4982 scope.go:117] "RemoveContainer" containerID="63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.743371 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-grrc5" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.751446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" event={"ID":"1eeec26f-f857-4cfd-bc3a-54ff1e371b06","Type":"ContainerStarted","Data":"25cc226cd82f7660634d3c1c673cc7e9ea67160364d27cb9794ba79816a13e09"} Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.753838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" event={"ID":"f8e1934e-1ae7-4e66-8b7d-c49c55350e43","Type":"ContainerStarted","Data":"6f15559e1592745f32d5fc49d4a18e4e87dade93a9de52a91cb1affd79ec40f6"} Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.753965 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.757173 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-vfwzn" podStartSLOduration=1.887140355 podStartE2EDuration="5.757136856s" podCreationTimestamp="2026-01-23 08:08:25 +0000 UTC" firstStartedPulling="2026-01-23 08:08:26.123061317 +0000 UTC m=+780.603424703" lastFinishedPulling="2026-01-23 08:08:29.993057818 +0000 UTC m=+784.473421204" observedRunningTime="2026-01-23 08:08:30.749367948 +0000 UTC m=+785.229731334" watchObservedRunningTime="2026-01-23 08:08:30.757136856 +0000 UTC m=+785.237500262" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.782740 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" podStartSLOduration=3.713909243 podStartE2EDuration="5.782715161s" podCreationTimestamp="2026-01-23 08:08:25 +0000 UTC" firstStartedPulling="2026-01-23 08:08:27.926242744 +0000 UTC m=+782.406606130" lastFinishedPulling="2026-01-23 08:08:29.995048652 +0000 UTC m=+784.475412048" observedRunningTime="2026-01-23 08:08:30.775073546 +0000 UTC m=+785.255436952" watchObservedRunningTime="2026-01-23 08:08:30.782715161 +0000 UTC m=+785.263078547" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.801486 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-grrc5"] Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.810310 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-grrc5"] Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.890883 4982 scope.go:117] "RemoveContainer" containerID="643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.949395 4982 scope.go:117] "RemoveContainer" containerID="fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.970501 4982 scope.go:117] "RemoveContainer" containerID="63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c" Jan 23 08:08:30 crc kubenswrapper[4982]: E0123 08:08:30.971005 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c\": container with ID starting with 63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c not found: ID does not exist" containerID="63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c" Jan 23 08:08:30 crc 
kubenswrapper[4982]: I0123 08:08:30.971055 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c"} err="failed to get container status \"63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c\": rpc error: code = NotFound desc = could not find container \"63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c\": container with ID starting with 63bd3c6d9a7a92e1d3563318afc6fed15946defc3013ebbef97166ce615e8d1c not found: ID does not exist" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.971084 4982 scope.go:117] "RemoveContainer" containerID="643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6" Jan 23 08:08:30 crc kubenswrapper[4982]: E0123 08:08:30.971540 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6\": container with ID starting with 643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6 not found: ID does not exist" containerID="643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.971561 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6"} err="failed to get container status \"643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6\": rpc error: code = NotFound desc = could not find container \"643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6\": container with ID starting with 643ab780e977d7b5da85125ef02e203a9ff07336b2aebcb947d2e5d0064877d6 not found: ID does not exist" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.971575 4982 scope.go:117] "RemoveContainer" containerID="fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb" Jan 23 08:08:30 crc kubenswrapper[4982]: E0123 08:08:30.972485 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb\": container with ID starting with fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb not found: ID does not exist" containerID="fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb" Jan 23 08:08:30 crc kubenswrapper[4982]: I0123 08:08:30.972520 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb"} err="failed to get container status \"fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb\": rpc error: code = NotFound desc = could not find container \"fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb\": container with ID starting with fade8b5ac2a86853c97ffbcd5f4252882cdcebeb51e29d1ca7288a1bd62987cb not found: ID does not exist" Jan 23 08:08:31 crc kubenswrapper[4982]: I0123 08:08:31.767414 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" event={"ID":"c9f0bd14-9c31-4898-ae22-e0890f02ffb7","Type":"ContainerStarted","Data":"2ef6adab9dc3c774bc31b332588f999e272e25a1f73214aede5f21316d8623b1"} Jan 23 08:08:31 crc kubenswrapper[4982]: I0123 08:08:31.803195 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-zxgxq" podStartSLOduration=3.756552455 podStartE2EDuration="6.803178171s" podCreationTimestamp="2026-01-23 08:08:25 +0000 UTC" firstStartedPulling="2026-01-23 08:08:27.925386971 +0000 UTC m=+782.405750367" lastFinishedPulling="2026-01-23 08:08:30.972012697 +0000 UTC m=+785.452376083" observedRunningTime="2026-01-23 08:08:31.799569545 +0000 UTC m=+786.279932931" watchObservedRunningTime="2026-01-23 08:08:31.803178171 +0000 UTC m=+786.283541557" Jan 23 08:08:31 crc kubenswrapper[4982]: I0123 08:08:31.835691 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" path="/var/lib/kubelet/pods/cbf14c37-9ab3-4ed4-9527-42498fe7c358/volumes" Jan 23 08:08:33 crc kubenswrapper[4982]: I0123 08:08:33.795241 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" event={"ID":"1eeec26f-f857-4cfd-bc3a-54ff1e371b06","Type":"ContainerStarted","Data":"c4ae49b88ce86c3ed9b1232748b19b18c6d683c2ed44a74faf3b1940c8af2b2d"} Jan 23 08:08:33 crc kubenswrapper[4982]: I0123 08:08:33.834385 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-89w8z" podStartSLOduration=2.277385179 podStartE2EDuration="8.834202307s" podCreationTimestamp="2026-01-23 08:08:25 +0000 UTC" firstStartedPulling="2026-01-23 08:08:26.204500586 +0000 UTC m=+780.684863972" lastFinishedPulling="2026-01-23 08:08:32.761317714 +0000 UTC m=+787.241681100" observedRunningTime="2026-01-23 08:08:33.819426562 +0000 UTC m=+788.299789958" watchObservedRunningTime="2026-01-23 08:08:33.834202307 +0000 UTC m=+788.314565703" Jan 23 08:08:36 crc kubenswrapper[4982]: I0123 08:08:36.112231 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-vfwzn" Jan 23 08:08:36 crc kubenswrapper[4982]: I0123 08:08:36.259935 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:36 crc kubenswrapper[4982]: I0123 08:08:36.260272 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:36 crc kubenswrapper[4982]: I0123 08:08:36.269685 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:36 crc kubenswrapper[4982]: I0123 08:08:36.825512 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b869c7dd5-2qhfp" Jan 23 08:08:36 crc kubenswrapper[4982]: I0123 08:08:36.892252 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qdpxs"] Jan 23 08:08:41 crc kubenswrapper[4982]: I0123 08:08:41.437840 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:08:41 crc kubenswrapper[4982]: I0123 08:08:41.438831 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Jan 23 08:08:46 crc kubenswrapper[4982]: I0123 08:08:46.381301 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-lp5mm" Jan 23 08:09:01 crc kubenswrapper[4982]: I0123 08:09:01.955530 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qdpxs" podUID="4b63c011-53bb-4dce-8db4-849770a043ba" containerName="console" containerID="cri-o://80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131" gracePeriod=15 Jan 23 08:09:02 crc kubenswrapper[4982]: I0123 08:09:02.926432 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qdpxs_4b63c011-53bb-4dce-8db4-849770a043ba/console/0.log" Jan 23 08:09:02 crc kubenswrapper[4982]: I0123 08:09:02.926934 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.026540 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qdpxs_4b63c011-53bb-4dce-8db4-849770a043ba/console/0.log" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.026588 4982 generic.go:334] "Generic (PLEG): container finished" podID="4b63c011-53bb-4dce-8db4-849770a043ba" containerID="80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131" exitCode=2 Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.026651 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qdpxs" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.026667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qdpxs" event={"ID":"4b63c011-53bb-4dce-8db4-849770a043ba","Type":"ContainerDied","Data":"80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131"} Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.026720 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qdpxs" event={"ID":"4b63c011-53bb-4dce-8db4-849770a043ba","Type":"ContainerDied","Data":"516f54b80305fc4767778a21b36735a669380fb5e697c3275999756499355cda"} Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.026762 4982 scope.go:117] "RemoveContainer" containerID="80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.045718 4982 scope.go:117] "RemoveContainer" containerID="80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131" Jan 23 08:09:03 crc kubenswrapper[4982]: E0123 08:09:03.046324 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131\": container with ID starting with 80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131 not found: ID does not exist" containerID="80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.046361 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131"} err="failed to get container status \"80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131\": rpc error: code = NotFound desc = could not find container \"80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131\": 
container with ID starting with 80484e1d9ec9b869f69f4850c282e29ed86087dfc4a7fceb9b4784f6e09c9131 not found: ID does not exist" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064189 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-oauth-serving-cert\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064248 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-service-ca\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064272 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-oauth-config\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064294 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-console-config\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064330 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-serving-cert\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-trusted-ca-bundle\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.064413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjt8n\" (UniqueName: \"kubernetes.io/projected/4b63c011-53bb-4dce-8db4-849770a043ba-kube-api-access-tjt8n\") pod \"4b63c011-53bb-4dce-8db4-849770a043ba\" (UID: \"4b63c011-53bb-4dce-8db4-849770a043ba\") " Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.065106 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-service-ca" (OuterVolumeSpecName: "service-ca") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.065204 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-console-config" (OuterVolumeSpecName: "console-config") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.065555 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.065717 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.065978 4982 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.066307 4982 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.066319 4982 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.066330 4982 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b63c011-53bb-4dce-8db4-849770a043ba-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.071344 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.071507 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b63c011-53bb-4dce-8db4-849770a043ba-kube-api-access-tjt8n" (OuterVolumeSpecName: "kube-api-access-tjt8n") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "kube-api-access-tjt8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.071911 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4b63c011-53bb-4dce-8db4-849770a043ba" (UID: "4b63c011-53bb-4dce-8db4-849770a043ba"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.167978 4982 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.168029 4982 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b63c011-53bb-4dce-8db4-849770a043ba-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.168042 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjt8n\" (UniqueName: \"kubernetes.io/projected/4b63c011-53bb-4dce-8db4-849770a043ba-kube-api-access-tjt8n\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.366140 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qdpxs"] Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.376867 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qdpxs"] Jan 23 08:09:03 crc kubenswrapper[4982]: I0123 08:09:03.837675 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b63c011-53bb-4dce-8db4-849770a043ba" path="/var/lib/kubelet/pods/4b63c011-53bb-4dce-8db4-849770a043ba/volumes" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.335611 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw"] Jan 23 08:09:05 crc kubenswrapper[4982]: E0123 08:09:05.335995 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="extract-utilities" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.336013 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="extract-utilities" Jan 23 08:09:05 crc kubenswrapper[4982]: E0123 08:09:05.336173 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b63c011-53bb-4dce-8db4-849770a043ba" containerName="console" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.336184 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b63c011-53bb-4dce-8db4-849770a043ba" containerName="console" Jan 23 08:09:05 crc kubenswrapper[4982]: E0123 08:09:05.336197 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="registry-server" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.336204 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="registry-server" Jan 23 08:09:05 crc kubenswrapper[4982]: E0123 08:09:05.336222 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="extract-content" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.336230 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="extract-content" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.336378 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b63c011-53bb-4dce-8db4-849770a043ba" containerName="console" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.336403 4982 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cbf14c37-9ab3-4ed4-9527-42498fe7c358" containerName="registry-server" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.337559 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.339956 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.342496 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw"] Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.400642 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxwd\" (UniqueName: \"kubernetes.io/projected/4865d9eb-206e-4e2c-9cc9-343250fefe10-kube-api-access-fxxwd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.400727 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.400853 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.502408 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxwd\" (UniqueName: \"kubernetes.io/projected/4865d9eb-206e-4e2c-9cc9-343250fefe10-kube-api-access-fxxwd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.502471 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.502520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.502944 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.503055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.522462 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxwd\" (UniqueName: \"kubernetes.io/projected/4865d9eb-206e-4e2c-9cc9-343250fefe10-kube-api-access-fxxwd\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:05 crc kubenswrapper[4982]: I0123 08:09:05.698786 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:06 crc kubenswrapper[4982]: I0123 08:09:06.113434 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw"] Jan 23 08:09:07 crc kubenswrapper[4982]: I0123 08:09:07.066997 4982 generic.go:334] "Generic (PLEG): container finished" podID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerID="e47d0b125b52eaeffca4b87e6a1424ca8e4f812bdc7d80cc844e38dbcfeb0d10" exitCode=0 Jan 23 08:09:07 crc kubenswrapper[4982]: I0123 08:09:07.067079 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" event={"ID":"4865d9eb-206e-4e2c-9cc9-343250fefe10","Type":"ContainerDied","Data":"e47d0b125b52eaeffca4b87e6a1424ca8e4f812bdc7d80cc844e38dbcfeb0d10"} Jan 23 08:09:07 crc kubenswrapper[4982]: I0123 08:09:07.067325 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" event={"ID":"4865d9eb-206e-4e2c-9cc9-343250fefe10","Type":"ContainerStarted","Data":"da6d272901baf6404e9f35de8f65d32564aba52a7c2b0a30f59e74713f4aadbf"} Jan 23 08:09:09 crc kubenswrapper[4982]: I0123 08:09:09.085898 4982 generic.go:334] "Generic (PLEG): container finished" podID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerID="bdd5b40fe9a3877371a90a5c40a2a4cea3eefe1c7edc8dde8271983fa47efa18" exitCode=0 Jan 23 08:09:09 crc kubenswrapper[4982]: I0123 08:09:09.086093 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" event={"ID":"4865d9eb-206e-4e2c-9cc9-343250fefe10","Type":"ContainerDied","Data":"bdd5b40fe9a3877371a90a5c40a2a4cea3eefe1c7edc8dde8271983fa47efa18"} Jan 23 08:09:10 crc kubenswrapper[4982]: I0123 
08:09:10.095461 4982 generic.go:334] "Generic (PLEG): container finished" podID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerID="f4c541778c8a92fc3821c515a7fce3a44a970ac5764ef8ed3036997c39dff5bf" exitCode=0 Jan 23 08:09:10 crc kubenswrapper[4982]: I0123 08:09:10.095575 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" event={"ID":"4865d9eb-206e-4e2c-9cc9-343250fefe10","Type":"ContainerDied","Data":"f4c541778c8a92fc3821c515a7fce3a44a970ac5764ef8ed3036997c39dff5bf"} Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.342327 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.436389 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.436441 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.486960 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-bundle\") pod \"4865d9eb-206e-4e2c-9cc9-343250fefe10\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.487942 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-util\") pod \"4865d9eb-206e-4e2c-9cc9-343250fefe10\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.487961 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-bundle" (OuterVolumeSpecName: "bundle") pod "4865d9eb-206e-4e2c-9cc9-343250fefe10" (UID: "4865d9eb-206e-4e2c-9cc9-343250fefe10"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.488150 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxxwd\" (UniqueName: \"kubernetes.io/projected/4865d9eb-206e-4e2c-9cc9-343250fefe10-kube-api-access-fxxwd\") pod \"4865d9eb-206e-4e2c-9cc9-343250fefe10\" (UID: \"4865d9eb-206e-4e2c-9cc9-343250fefe10\") " Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.488512 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.493737 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4865d9eb-206e-4e2c-9cc9-343250fefe10-kube-api-access-fxxwd" (OuterVolumeSpecName: "kube-api-access-fxxwd") pod "4865d9eb-206e-4e2c-9cc9-343250fefe10" (UID: "4865d9eb-206e-4e2c-9cc9-343250fefe10"). InnerVolumeSpecName "kube-api-access-fxxwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.502185 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-util" (OuterVolumeSpecName: "util") pod "4865d9eb-206e-4e2c-9cc9-343250fefe10" (UID: "4865d9eb-206e-4e2c-9cc9-343250fefe10"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.589222 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4865d9eb-206e-4e2c-9cc9-343250fefe10-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:11 crc kubenswrapper[4982]: I0123 08:09:11.589266 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxxwd\" (UniqueName: \"kubernetes.io/projected/4865d9eb-206e-4e2c-9cc9-343250fefe10-kube-api-access-fxxwd\") on node \"crc\" DevicePath \"\"" Jan 23 08:09:12 crc kubenswrapper[4982]: I0123 08:09:12.112112 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" event={"ID":"4865d9eb-206e-4e2c-9cc9-343250fefe10","Type":"ContainerDied","Data":"da6d272901baf6404e9f35de8f65d32564aba52a7c2b0a30f59e74713f4aadbf"} Jan 23 08:09:12 crc kubenswrapper[4982]: I0123 08:09:12.112193 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da6d272901baf6404e9f35de8f65d32564aba52a7c2b0a30f59e74713f4aadbf" Jan 23 08:09:12 crc kubenswrapper[4982]: I0123 08:09:12.112137 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.631189 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5"] Jan 23 08:09:20 crc kubenswrapper[4982]: E0123 08:09:20.632477 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="pull" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.632497 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="pull" Jan 23 08:09:20 crc kubenswrapper[4982]: E0123 08:09:20.632512 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="extract" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.632519 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="extract" Jan 23 08:09:20 crc kubenswrapper[4982]: E0123 08:09:20.632528 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="util" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.632536 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="util" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.632708 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4865d9eb-206e-4e2c-9cc9-343250fefe10" containerName="extract" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.633307 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.636096 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.640491 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.642143 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.643577 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-w6zsf" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.643981 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.652088 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5"] Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.813981 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxftz\" (UniqueName: \"kubernetes.io/projected/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-kube-api-access-lxftz\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.814026 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-webhook-cert\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.814067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-apiservice-cert\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.915820 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxftz\" (UniqueName: \"kubernetes.io/projected/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-kube-api-access-lxftz\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.915876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-webhook-cert\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.915911 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-apiservice-cert\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.925585 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-webhook-cert\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.934290 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-apiservice-cert\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.947761 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxftz\" (UniqueName: \"kubernetes.io/projected/0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1-kube-api-access-lxftz\") pod \"metallb-operator-controller-manager-6d4d669f8c-c6qg5\" (UID: \"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1\") " pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:20 crc kubenswrapper[4982]: I0123 08:09:20.959109 4982 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.108882 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh"] Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.109716 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.112013 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n7qbd" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.112826 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.121805 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-apiservice-cert\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.121873 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-webhook-cert\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.121905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzsn\" (UniqueName: \"kubernetes.io/projected/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-kube-api-access-htzsn\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.123181 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.131849 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh"] Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.229042 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htzsn\" (UniqueName: \"kubernetes.io/projected/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-kube-api-access-htzsn\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.229616 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-apiservice-cert\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.229721 
4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-webhook-cert\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.236364 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-webhook-cert\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.238834 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-apiservice-cert\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.263534 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htzsn\" (UniqueName: \"kubernetes.io/projected/0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7-kube-api-access-htzsn\") pod \"metallb-operator-webhook-server-7b77b75c7d-w7fhh\" (UID: \"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7\") " pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.399718 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5"] Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.432466 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:21 crc kubenswrapper[4982]: I0123 08:09:21.690112 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh"] Jan 23 08:09:21 crc kubenswrapper[4982]: W0123 08:09:21.705941 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fc52ef0_2aba_4ecf_89fb_5fdc9cbb98b7.slice/crio-f39563d6d93e57913f32115bef4e6dbd7069346d3c6bbe712650673f3c4f91bc WatchSource:0}: Error finding container f39563d6d93e57913f32115bef4e6dbd7069346d3c6bbe712650673f3c4f91bc: Status 404 returned error can't find the container with id f39563d6d93e57913f32115bef4e6dbd7069346d3c6bbe712650673f3c4f91bc Jan 23 08:09:22 crc kubenswrapper[4982]: I0123 08:09:22.187727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" event={"ID":"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7","Type":"ContainerStarted","Data":"f39563d6d93e57913f32115bef4e6dbd7069346d3c6bbe712650673f3c4f91bc"} Jan 23 08:09:22 crc kubenswrapper[4982]: I0123 08:09:22.189122 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" event={"ID":"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1","Type":"ContainerStarted","Data":"4dfe9d479c01f7ac3dabb95b58ff64393b82628f9aee34fde8c8bf1398b754cc"} Jan 23 08:09:31 crc kubenswrapper[4982]: I0123 08:09:31.254051 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" event={"ID":"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1","Type":"ContainerStarted","Data":"773a39599329a6de9ea462c8d0aaa8571a9eae39cd277d4ad6bd3a46cb1dbf16"} Jan 23 08:09:31 crc kubenswrapper[4982]: I0123 08:09:31.255702 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 08:09:31 crc kubenswrapper[4982]: I0123 08:09:31.257341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" event={"ID":"0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7","Type":"ContainerStarted","Data":"ecfe191bf95bd594dec2e680371c13dfa7ca3ae1c4efdbb7013c93fcad755033"} Jan 23 08:09:31 crc kubenswrapper[4982]: I0123 08:09:31.257848 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:31 crc kubenswrapper[4982]: I0123 08:09:31.281808 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" podStartSLOduration=2.105377342 podStartE2EDuration="11.28178814s" podCreationTimestamp="2026-01-23 08:09:20 +0000 UTC" firstStartedPulling="2026-01-23 08:09:21.406975142 +0000 UTC m=+835.887338528" lastFinishedPulling="2026-01-23 08:09:30.58338594 +0000 UTC m=+845.063749326" observedRunningTime="2026-01-23 08:09:31.276097608 +0000 UTC m=+845.756461004" watchObservedRunningTime="2026-01-23 08:09:31.28178814 +0000 UTC m=+845.762151526" Jan 23 08:09:31 crc kubenswrapper[4982]: I0123 08:09:31.301725 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" podStartSLOduration=1.401271721 podStartE2EDuration="10.301705313s" 
podCreationTimestamp="2026-01-23 08:09:21 +0000 UTC" firstStartedPulling="2026-01-23 08:09:21.708258505 +0000 UTC m=+836.188621891" lastFinishedPulling="2026-01-23 08:09:30.608692097 +0000 UTC m=+845.089055483" observedRunningTime="2026-01-23 08:09:31.298889448 +0000 UTC m=+845.779252844" watchObservedRunningTime="2026-01-23 08:09:31.301705313 +0000 UTC m=+845.782068699" Jan 23 08:09:41 crc kubenswrapper[4982]: I0123 08:09:41.435613 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:09:41 crc kubenswrapper[4982]: I0123 08:09:41.436154 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:09:41 crc kubenswrapper[4982]: I0123 08:09:41.436194 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:09:41 crc kubenswrapper[4982]: I0123 08:09:41.436772 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"35ca4511cd3232156669a737407761628df1a5436b3552a0057dc7fc63d6e9b8"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:09:41 crc kubenswrapper[4982]: I0123 08:09:41.436829 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://35ca4511cd3232156669a737407761628df1a5436b3552a0057dc7fc63d6e9b8" gracePeriod=600 Jan 23 08:09:41 crc kubenswrapper[4982]: I0123 08:09:41.441849 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7b77b75c7d-w7fhh" Jan 23 08:09:42 crc kubenswrapper[4982]: I0123 08:09:42.370517 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="35ca4511cd3232156669a737407761628df1a5436b3552a0057dc7fc63d6e9b8" exitCode=0 Jan 23 08:09:42 crc kubenswrapper[4982]: I0123 08:09:42.370585 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"35ca4511cd3232156669a737407761628df1a5436b3552a0057dc7fc63d6e9b8"} Jan 23 08:09:42 crc kubenswrapper[4982]: I0123 08:09:42.371221 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"1d24a630527fc6e0922fa5b0c623c16897e3532c564083acc123c1c83ab59cae"} Jan 23 08:09:42 crc kubenswrapper[4982]: I0123 08:09:42.371244 4982 scope.go:117] "RemoveContainer" containerID="ee99b5f60e92b405232080c1205e0aafb07531f737cb696db384041538e4ab08" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.708681 4982 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j99c9"] Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.710501 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.729833 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j99c9"] Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.839859 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzp8\" (UniqueName: \"kubernetes.io/projected/de8550f2-6618-4da5-9acf-698ac73de792-kube-api-access-hhzp8\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.839917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-utilities\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.839966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-catalog-content\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.941612 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhzp8\" (UniqueName: \"kubernetes.io/projected/de8550f2-6618-4da5-9acf-698ac73de792-kube-api-access-hhzp8\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.941680 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-utilities\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.941725 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-catalog-content\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.942198 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-utilities\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.942247 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-catalog-content\") pod \"redhat-marketplace-j99c9\" (UID: 
\"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:51 crc kubenswrapper[4982]: I0123 08:09:51.969373 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhzp8\" (UniqueName: \"kubernetes.io/projected/de8550f2-6618-4da5-9acf-698ac73de792-kube-api-access-hhzp8\") pod \"redhat-marketplace-j99c9\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:52 crc kubenswrapper[4982]: I0123 08:09:52.029539 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:09:52 crc kubenswrapper[4982]: I0123 08:09:52.301767 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j99c9"] Jan 23 08:09:52 crc kubenswrapper[4982]: I0123 08:09:52.461772 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerStarted","Data":"180960365bd3436920548134cd27314d1f66b7fc4d011b2d8dbaf97b90898845"} Jan 23 08:09:53 crc kubenswrapper[4982]: I0123 08:09:53.472007 4982 generic.go:334] "Generic (PLEG): container finished" podID="de8550f2-6618-4da5-9acf-698ac73de792" containerID="6e149ed9ed1c937b721626efde662c571c99fc9c9678db75a5acaf521523a7d1" exitCode=0 Jan 23 08:09:53 crc kubenswrapper[4982]: I0123 08:09:53.472077 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerDied","Data":"6e149ed9ed1c937b721626efde662c571c99fc9c9678db75a5acaf521523a7d1"} Jan 23 08:09:54 crc kubenswrapper[4982]: I0123 08:09:54.482061 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerStarted","Data":"a72082a0b0059c571902cf73f5dee79b8f2285eb5d032a47705197bc8e900ea2"} Jan 23 08:09:55 crc kubenswrapper[4982]: I0123 08:09:55.490025 4982 generic.go:334] "Generic (PLEG): container finished" podID="de8550f2-6618-4da5-9acf-698ac73de792" containerID="a72082a0b0059c571902cf73f5dee79b8f2285eb5d032a47705197bc8e900ea2" exitCode=0 Jan 23 08:09:55 crc kubenswrapper[4982]: I0123 08:09:55.490133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerDied","Data":"a72082a0b0059c571902cf73f5dee79b8f2285eb5d032a47705197bc8e900ea2"} Jan 23 08:09:56 crc kubenswrapper[4982]: I0123 08:09:56.500561 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerStarted","Data":"f254c16f299903b5db63387554f33d6c4f4784f4bec46b1f372ee7b12f11cd4b"} Jan 23 08:09:56 crc kubenswrapper[4982]: I0123 08:09:56.518823 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j99c9" podStartSLOduration=3.123704901 podStartE2EDuration="5.518799331s" podCreationTimestamp="2026-01-23 08:09:51 +0000 UTC" firstStartedPulling="2026-01-23 08:09:53.474120597 +0000 UTC m=+867.954484003" lastFinishedPulling="2026-01-23 08:09:55.869215047 +0000 UTC m=+870.349578433" observedRunningTime="2026-01-23 08:09:56.517519107 +0000 UTC m=+870.997882503" 
Jan 23 08:10:00 crc kubenswrapper[4982]: I0123 08:10:00.963136 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.651356 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kd7nd"]
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.655423 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.657441 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.657698 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tkghq"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.657704 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.666712 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"]
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.667836 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.670999 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.674825 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"]
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.772143 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-b98fj"]
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.773427 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.776835 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.777186 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.777514 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.777724 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-6ctnv"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.790156 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-dj7gd"]
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792264 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-frr-sockets\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5740314-fbfc-46d5-a79c-7c5168604561-metrics-certs\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792373 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02c1e896-b983-4902-b815-b7f6c74aa735-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jj6qf\" (UID: \"02c1e896-b983-4902-b815-b7f6c74aa735\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792399 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-frr-conf\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792429 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgv6d\" (UniqueName: \"kubernetes.io/projected/02c1e896-b983-4902-b815-b7f6c74aa735-kube-api-access-vgv6d\") pod \"frr-k8s-webhook-server-7df86c4f6c-jj6qf\" (UID: \"02c1e896-b983-4902-b815-b7f6c74aa735\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792454 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792559 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-metrics\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792608 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-reloader\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792721 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7br\" (UniqueName: \"kubernetes.io/projected/d5740314-fbfc-46d5-a79c-7c5168604561-kube-api-access-7k7br\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.792778 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d5740314-fbfc-46d5-a79c-7c5168604561-frr-startup\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.793832 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.825272 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dj7gd"]
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.893926 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-metrics\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.893973 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-reloader\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894019 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-metrics-certs\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/40a8d301-7add-45a8-8379-cfe8c438a5ce-metallb-excludel2\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894058 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7br\" (UniqueName: \"kubernetes.io/projected/d5740314-fbfc-46d5-a79c-7c5168604561-kube-api-access-7k7br\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894085 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d5740314-fbfc-46d5-a79c-7c5168604561-frr-startup\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894106 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-metrics-certs\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894168 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-cert\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894186 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-frr-sockets\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894250 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5740314-fbfc-46d5-a79c-7c5168604561-metrics-certs\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02c1e896-b983-4902-b815-b7f6c74aa735-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jj6qf\" (UID: \"02c1e896-b983-4902-b815-b7f6c74aa735\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894293 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-frr-conf\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894315 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxkgv\" (UniqueName: \"kubernetes.io/projected/40a8d301-7add-45a8-8379-cfe8c438a5ce-kube-api-access-jxkgv\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894343 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894359 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgv6d\" (UniqueName: \"kubernetes.io/projected/02c1e896-b983-4902-b815-b7f6c74aa735-kube-api-access-vgv6d\") pod \"frr-k8s-webhook-server-7df86c4f6c-jj6qf\" (UID: \"02c1e896-b983-4902-b815-b7f6c74aa735\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.894406 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnf7d\" (UniqueName: \"kubernetes.io/projected/f717bfd9-1d07-4b04-b34e-9d19ffea876f-kube-api-access-tnf7d\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.895914 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-metrics\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.896230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-frr-sockets\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.896485 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-reloader\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.897010 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d5740314-fbfc-46d5-a79c-7c5168604561-frr-conf\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.897705 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d5740314-fbfc-46d5-a79c-7c5168604561-frr-startup\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.911604 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5740314-fbfc-46d5-a79c-7c5168604561-metrics-certs\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.911611 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02c1e896-b983-4902-b815-b7f6c74aa735-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jj6qf\" (UID: \"02c1e896-b983-4902-b815-b7f6c74aa735\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.914660 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgv6d\" (UniqueName: \"kubernetes.io/projected/02c1e896-b983-4902-b815-b7f6c74aa735-kube-api-access-vgv6d\") pod \"frr-k8s-webhook-server-7df86c4f6c-jj6qf\" (UID: \"02c1e896-b983-4902-b815-b7f6c74aa735\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.920404 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7br\" (UniqueName: \"kubernetes.io/projected/d5740314-fbfc-46d5-a79c-7c5168604561-kube-api-access-7k7br\") pod \"frr-k8s-kd7nd\" (UID: \"d5740314-fbfc-46d5-a79c-7c5168604561\") " pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.980154 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kd7nd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.994753 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995577 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-metrics-certs\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995650 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-cert\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995699 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxkgv\" (UniqueName: \"kubernetes.io/projected/40a8d301-7add-45a8-8379-cfe8c438a5ce-kube-api-access-jxkgv\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995718 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995737 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnf7d\" (UniqueName: \"kubernetes.io/projected/f717bfd9-1d07-4b04-b34e-9d19ffea876f-kube-api-access-tnf7d\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995768 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-metrics-certs\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.995786 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/40a8d301-7add-45a8-8379-cfe8c438a5ce-metallb-excludel2\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.996471 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/40a8d301-7add-45a8-8379-cfe8c438a5ce-metallb-excludel2\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:01 crc kubenswrapper[4982]: E0123 08:10:01.996935 4982 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 23 08:10:01 crc kubenswrapper[4982]: E0123 08:10:01.997019 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist podName:40a8d301-7add-45a8-8379-cfe8c438a5ce nodeName:}" failed. No retries permitted until 2026-01-23 08:10:02.496994134 +0000 UTC m=+876.977357570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist") pod "speaker-b98fj" (UID: "40a8d301-7add-45a8-8379-cfe8c438a5ce") : secret "metallb-memberlist" not found
Jan 23 08:10:01 crc kubenswrapper[4982]: E0123 08:10:01.997702 4982 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 23 08:10:01 crc kubenswrapper[4982]: E0123 08:10:01.997734 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-metrics-certs podName:f717bfd9-1d07-4b04-b34e-9d19ffea876f nodeName:}" failed. No retries permitted until 2026-01-23 08:10:02.497726933 +0000 UTC m=+876.978090319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-metrics-certs") pod "controller-6968d8fdc4-dj7gd" (UID: "f717bfd9-1d07-4b04-b34e-9d19ffea876f") : secret "controller-certs-secret" not found
Jan 23 08:10:01 crc kubenswrapper[4982]: I0123 08:10:01.999590 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-metrics-certs\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.000658 4982 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.012000 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-cert\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.020515 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxkgv\" (UniqueName: \"kubernetes.io/projected/40a8d301-7add-45a8-8379-cfe8c438a5ce-kube-api-access-jxkgv\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.020799 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnf7d\" (UniqueName: \"kubernetes.io/projected/f717bfd9-1d07-4b04-b34e-9d19ffea876f-kube-api-access-tnf7d\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.030709 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j99c9"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.030766 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j99c9"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.079780 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j99c9"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.463251 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf"]
Jan 23 08:10:02 crc kubenswrapper[4982]: W0123 08:10:02.471327 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c1e896_b983_4902_b815_b7f6c74aa735.slice/crio-151ab61440f0af8767bfd305b76c12d20ef7e38c54a33145f8be9c306bd24680 WatchSource:0}: Error finding container 151ab61440f0af8767bfd305b76c12d20ef7e38c54a33145f8be9c306bd24680: Status 404 returned error can't find the container with id 151ab61440f0af8767bfd305b76c12d20ef7e38c54a33145f8be9c306bd24680
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.503931 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.504004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-metrics-certs\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:02 crc kubenswrapper[4982]: E0123 08:10:02.504182 4982 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 23 08:10:02 crc kubenswrapper[4982]: E0123 08:10:02.504298 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist podName:40a8d301-7add-45a8-8379-cfe8c438a5ce nodeName:}" failed. No retries permitted until 2026-01-23 08:10:03.50427171 +0000 UTC m=+877.984635106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist") pod "speaker-b98fj" (UID: "40a8d301-7add-45a8-8379-cfe8c438a5ce") : secret "metallb-memberlist" not found
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.512493 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f717bfd9-1d07-4b04-b34e-9d19ffea876f-metrics-certs\") pod \"controller-6968d8fdc4-dj7gd\" (UID: \"f717bfd9-1d07-4b04-b34e-9d19ffea876f\") " pod="metallb-system/controller-6968d8fdc4-dj7gd"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.552738 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf" event={"ID":"02c1e896-b983-4902-b815-b7f6c74aa735","Type":"ContainerStarted","Data":"151ab61440f0af8767bfd305b76c12d20ef7e38c54a33145f8be9c306bd24680"}
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.561953 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"f9d0c1f35725a94de607daf80f59017bfc1c408afbc9181551f223a59d363389"}
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.640957 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j99c9"
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.697991 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j99c9"]
Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.707205 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dj7gd"
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dj7gd" Jan 23 08:10:02 crc kubenswrapper[4982]: I0123 08:10:02.971231 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dj7gd"] Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.552759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj" Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.563365 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/40a8d301-7add-45a8-8379-cfe8c438a5ce-memberlist\") pod \"speaker-b98fj\" (UID: \"40a8d301-7add-45a8-8379-cfe8c438a5ce\") " pod="metallb-system/speaker-b98fj" Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.580922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dj7gd" event={"ID":"f717bfd9-1d07-4b04-b34e-9d19ffea876f","Type":"ContainerStarted","Data":"afcf85c3cd14f81c0b6c7f63c4e4825db5eb0723a9aa2e510cab8d4f6abc015f"} Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.580996 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dj7gd" event={"ID":"f717bfd9-1d07-4b04-b34e-9d19ffea876f","Type":"ContainerStarted","Data":"9d1cac8f43e600dbfc20659eb7d68e02caf64cd9656bafc73e9d85902d837797"} Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.581012 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dj7gd" event={"ID":"f717bfd9-1d07-4b04-b34e-9d19ffea876f","Type":"ContainerStarted","Data":"387307f62e825b03a29258b72f5def8cb39e650af5678d87ff3cac811a1cab49"} Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.589565 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-b98fj" Jan 23 08:10:03 crc kubenswrapper[4982]: I0123 08:10:03.601764 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-dj7gd" podStartSLOduration=2.601740651 podStartE2EDuration="2.601740651s" podCreationTimestamp="2026-01-23 08:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:10:03.599439209 +0000 UTC m=+878.079802595" watchObservedRunningTime="2026-01-23 08:10:03.601740651 +0000 UTC m=+878.082104047" Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.594425 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b98fj" event={"ID":"40a8d301-7add-45a8-8379-cfe8c438a5ce","Type":"ContainerStarted","Data":"d7666fed8457f2d56b497cc0fab16ed1744cce22b99abeda9b0cd76f6c053af0"} Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.594552 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j99c9" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="registry-server" containerID="cri-o://f254c16f299903b5db63387554f33d6c4f4784f4bec46b1f372ee7b12f11cd4b" gracePeriod=2 Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.595094 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-dj7gd" Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.595197 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b98fj" event={"ID":"40a8d301-7add-45a8-8379-cfe8c438a5ce","Type":"ContainerStarted","Data":"e3477417182dfbe48b168dfe610d559715efde8a70405605e783c9f6e5b41e71"} Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.595214 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-b98fj" event={"ID":"40a8d301-7add-45a8-8379-cfe8c438a5ce","Type":"ContainerStarted","Data":"c93e02f2fa771c891e132780a8c11e82b017a8cb95e2e38bc0abadcc7b603a87"} Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.595332 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-b98fj" Jan 23 08:10:04 crc kubenswrapper[4982]: I0123 08:10:04.628505 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-b98fj" podStartSLOduration=3.62848728 podStartE2EDuration="3.62848728s" podCreationTimestamp="2026-01-23 08:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:10:04.623389533 +0000 UTC m=+879.103752929" watchObservedRunningTime="2026-01-23 08:10:04.62848728 +0000 UTC m=+879.108850666" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.607727 4982 generic.go:334] "Generic (PLEG): container finished" podID="de8550f2-6618-4da5-9acf-698ac73de792" containerID="f254c16f299903b5db63387554f33d6c4f4784f4bec46b1f372ee7b12f11cd4b" exitCode=0 Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.607797 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerDied","Data":"f254c16f299903b5db63387554f33d6c4f4784f4bec46b1f372ee7b12f11cd4b"} Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.667259 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.791690 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhzp8\" (UniqueName: \"kubernetes.io/projected/de8550f2-6618-4da5-9acf-698ac73de792-kube-api-access-hhzp8\") pod \"de8550f2-6618-4da5-9acf-698ac73de792\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.791983 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-utilities\") pod \"de8550f2-6618-4da5-9acf-698ac73de792\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.792742 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-catalog-content\") pod \"de8550f2-6618-4da5-9acf-698ac73de792\" (UID: \"de8550f2-6618-4da5-9acf-698ac73de792\") " Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.794810 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-utilities" (OuterVolumeSpecName: "utilities") pod "de8550f2-6618-4da5-9acf-698ac73de792" (UID: "de8550f2-6618-4da5-9acf-698ac73de792"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.807005 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8550f2-6618-4da5-9acf-698ac73de792-kube-api-access-hhzp8" (OuterVolumeSpecName: "kube-api-access-hhzp8") pod "de8550f2-6618-4da5-9acf-698ac73de792" (UID: "de8550f2-6618-4da5-9acf-698ac73de792"). InnerVolumeSpecName "kube-api-access-hhzp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.832670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de8550f2-6618-4da5-9acf-698ac73de792" (UID: "de8550f2-6618-4da5-9acf-698ac73de792"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.895006 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.895043 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhzp8\" (UniqueName: \"kubernetes.io/projected/de8550f2-6618-4da5-9acf-698ac73de792-kube-api-access-hhzp8\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:05 crc kubenswrapper[4982]: I0123 08:10:05.895056 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8550f2-6618-4da5-9acf-698ac73de792-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.617084 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j99c9" event={"ID":"de8550f2-6618-4da5-9acf-698ac73de792","Type":"ContainerDied","Data":"180960365bd3436920548134cd27314d1f66b7fc4d011b2d8dbaf97b90898845"} Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.617140 4982 scope.go:117] "RemoveContainer" containerID="f254c16f299903b5db63387554f33d6c4f4784f4bec46b1f372ee7b12f11cd4b" Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.617246 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j99c9" Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.646584 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j99c9"] Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.655892 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j99c9"] Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.662909 4982 scope.go:117] "RemoveContainer" containerID="a72082a0b0059c571902cf73f5dee79b8f2285eb5d032a47705197bc8e900ea2" Jan 23 08:10:06 crc kubenswrapper[4982]: I0123 08:10:06.700098 4982 scope.go:117] "RemoveContainer" containerID="6e149ed9ed1c937b721626efde662c571c99fc9c9678db75a5acaf521523a7d1" Jan 23 08:10:07 crc kubenswrapper[4982]: I0123 08:10:07.837569 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8550f2-6618-4da5-9acf-698ac73de792" path="/var/lib/kubelet/pods/de8550f2-6618-4da5-9acf-698ac73de792/volumes" Jan 23 08:10:11 crc kubenswrapper[4982]: I0123 08:10:11.674729 4982 generic.go:334] "Generic (PLEG): container finished" podID="d5740314-fbfc-46d5-a79c-7c5168604561" containerID="0ce5d7f822a61b7d4ae94ae479ccdf96c3f349b40c394126dc516e4da341c6ec" exitCode=0 Jan 23 08:10:11 crc kubenswrapper[4982]: I0123 08:10:11.674831 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerDied","Data":"0ce5d7f822a61b7d4ae94ae479ccdf96c3f349b40c394126dc516e4da341c6ec"} Jan 23 08:10:11 crc kubenswrapper[4982]: I0123 08:10:11.677613 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf" event={"ID":"02c1e896-b983-4902-b815-b7f6c74aa735","Type":"ContainerStarted","Data":"628b9f7e5d0ca17dbb88716221dbabcd15c011f366606a6c4ea963c979c9bc50"} Jan 23 08:10:11 crc kubenswrapper[4982]: I0123 08:10:11.677852 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf" Jan 23 08:10:11 crc kubenswrapper[4982]: I0123 08:10:11.739759 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf" podStartSLOduration=2.255748132 podStartE2EDuration="10.739740056s" podCreationTimestamp="2026-01-23 08:10:01 +0000 UTC" firstStartedPulling="2026-01-23 08:10:02.473918538 +0000 UTC m=+876.954281924" lastFinishedPulling="2026-01-23 08:10:10.957910472 +0000 UTC m=+885.438273848" observedRunningTime="2026-01-23 08:10:11.735698948 +0000 UTC m=+886.216062334" watchObservedRunningTime="2026-01-23 08:10:11.739740056 +0000 UTC m=+886.220103442" Jan 23 08:10:12 crc kubenswrapper[4982]: I0123 08:10:12.686460 4982 generic.go:334] "Generic (PLEG): container finished" podID="d5740314-fbfc-46d5-a79c-7c5168604561" containerID="399b52cef2a20efd341cb0eb1b6a51cef1a1d1f98094d551dfade15bb8207fea" exitCode=0 Jan 23 08:10:12 crc kubenswrapper[4982]: I0123 08:10:12.686525 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerDied","Data":"399b52cef2a20efd341cb0eb1b6a51cef1a1d1f98094d551dfade15bb8207fea"} Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.004583 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-thkb9"] Jan 23 08:10:13 crc kubenswrapper[4982]: E0123 08:10:13.004946 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="extract-content" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.004993 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="extract-content" Jan 23 08:10:13 crc kubenswrapper[4982]: E0123 08:10:13.005006 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="registry-server" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.005014 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="registry-server" Jan 23 08:10:13 crc kubenswrapper[4982]: E0123 08:10:13.005038 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="extract-utilities" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.005046 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="extract-utilities" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.005213 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8550f2-6618-4da5-9acf-698ac73de792" containerName="registry-server" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.006383 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.020616 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thkb9"] Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.124438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-utilities\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.124488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-catalog-content\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.124559 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvp88\" (UniqueName: \"kubernetes.io/projected/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-kube-api-access-vvp88\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.226093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-utilities\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.226166 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-catalog-content\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.226257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvp88\" (UniqueName: \"kubernetes.io/projected/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-kube-api-access-vvp88\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.226736 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-utilities\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.226795 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-catalog-content\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.246396 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vvp88\" (UniqueName: \"kubernetes.io/projected/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-kube-api-access-vvp88\") pod \"community-operators-thkb9\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.330409 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.595455 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-b98fj" Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.696730 4982 generic.go:334] "Generic (PLEG): container finished" podID="d5740314-fbfc-46d5-a79c-7c5168604561" containerID="8d34b88ee8031f703163d84c6524dde729fda39f919937a40b13b9b7acf7cb14" exitCode=0 Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.696793 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerDied","Data":"8d34b88ee8031f703163d84c6524dde729fda39f919937a40b13b9b7acf7cb14"} Jan 23 08:10:13 crc kubenswrapper[4982]: W0123 08:10:13.854479 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8373dc6a_45df_4fd7_9c22_8ea0acec8e54.slice/crio-3f963c52737319ec316ca329e6283d90b05d70454d93a03a814110e05fc95e79 WatchSource:0}: Error finding container 3f963c52737319ec316ca329e6283d90b05d70454d93a03a814110e05fc95e79: Status 404 returned error can't find the container with id 3f963c52737319ec316ca329e6283d90b05d70454d93a03a814110e05fc95e79 Jan 23 08:10:13 crc kubenswrapper[4982]: I0123 08:10:13.859216 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-thkb9"] Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.709365 4982 generic.go:334] "Generic (PLEG): container finished" podID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerID="ffcaaadfbda9a40d2c20626f7e34fb5e4681f82dab8219e5e61c39d43e8e7195" exitCode=0 Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.709446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thkb9" event={"ID":"8373dc6a-45df-4fd7-9c22-8ea0acec8e54","Type":"ContainerDied","Data":"ffcaaadfbda9a40d2c20626f7e34fb5e4681f82dab8219e5e61c39d43e8e7195"} Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.709475 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thkb9" event={"ID":"8373dc6a-45df-4fd7-9c22-8ea0acec8e54","Type":"ContainerStarted","Data":"3f963c52737319ec316ca329e6283d90b05d70454d93a03a814110e05fc95e79"} Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.728275 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"284176b0aacb263b4e92d1ba84ee032c1691da55b3f6046acec46cbdb4b70a9a"} Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.728770 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"e742e771f550f291a1321c5cb9e5eb29c19c10362682e3db9cc8f2f3cd23049a"} Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.728784 4982 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"33bc34f283730daddd631df03194d5e93d123aac5cb2e9c679406712e15e68d9"} Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.728821 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"7fa070fc861565356744e1271a1282b44585380188c6a18a0a7df62bc6c302b8"} Jan 23 08:10:14 crc kubenswrapper[4982]: I0123 08:10:14.728830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"7f24a862d1d67b27f776e250d59ed1ea96790faaee3b53f7dcd0e94d86ccd986"} Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.225655 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw"] Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.227278 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.229508 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.236910 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw"] Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.364257 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.364313 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwbq\" (UniqueName: \"kubernetes.io/projected/6f10ca80-1061-44bf-933f-068fd39b2e3d-kube-api-access-sfwbq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.364500 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.466161 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 
08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.466229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwbq\" (UniqueName: \"kubernetes.io/projected/6f10ca80-1061-44bf-933f-068fd39b2e3d-kube-api-access-sfwbq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.466290 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.466595 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.466724 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.486553 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwbq\" (UniqueName: \"kubernetes.io/projected/6f10ca80-1061-44bf-933f-068fd39b2e3d-kube-api-access-sfwbq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.578834 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.740541 4982 generic.go:334] "Generic (PLEG): container finished" podID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerID="8bd732675351f2e87edc916ee7863f1761fb66279e5d82bdaa02c06d021998ba" exitCode=0 Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.740587 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thkb9" event={"ID":"8373dc6a-45df-4fd7-9c22-8ea0acec8e54","Type":"ContainerDied","Data":"8bd732675351f2e87edc916ee7863f1761fb66279e5d82bdaa02c06d021998ba"} Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.748677 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kd7nd" event={"ID":"d5740314-fbfc-46d5-a79c-7c5168604561","Type":"ContainerStarted","Data":"df616f88a5f6b95bc7e72408038759b5f1e518f576779dddc557ecdb6b57b7c5"} Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.749085 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kd7nd" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.792781 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kd7nd" podStartSLOduration=6.071598034 podStartE2EDuration="14.792758816s" podCreationTimestamp="2026-01-23 08:10:01 +0000 UTC" firstStartedPulling="2026-01-23 08:10:02.227130633 +0000 UTC m=+876.707494019" lastFinishedPulling="2026-01-23 08:10:10.948291415 +0000 UTC m=+885.428654801" observedRunningTime="2026-01-23 08:10:15.783860148 +0000 UTC m=+890.264223534" watchObservedRunningTime="2026-01-23 08:10:15.792758816 +0000 UTC m=+890.273122202" Jan 23 08:10:15 crc kubenswrapper[4982]: I0123 08:10:15.806977 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw"] Jan 23 08:10:16 crc kubenswrapper[4982]: I0123 08:10:16.763176 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerID="6df11ca2ed5950c7c018f5eaff3c983b75251c307869c39b33ca473972ebe8fe" exitCode=0 Jan 23 08:10:16 crc kubenswrapper[4982]: I0123 08:10:16.765337 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" event={"ID":"6f10ca80-1061-44bf-933f-068fd39b2e3d","Type":"ContainerDied","Data":"6df11ca2ed5950c7c018f5eaff3c983b75251c307869c39b33ca473972ebe8fe"} Jan 23 08:10:16 crc kubenswrapper[4982]: I0123 08:10:16.765403 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" event={"ID":"6f10ca80-1061-44bf-933f-068fd39b2e3d","Type":"ContainerStarted","Data":"5202fd5d7198f406681279215ec2709b30815b33d0e162b42ce2e50ccf3f80c4"} Jan 23 08:10:16 crc kubenswrapper[4982]: I0123 08:10:16.981303 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kd7nd" Jan 23 08:10:17 crc kubenswrapper[4982]: I0123 08:10:17.035793 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kd7nd" Jan 23 08:10:17 crc kubenswrapper[4982]: I0123 08:10:17.777761 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thkb9" 
event={"ID":"8373dc6a-45df-4fd7-9c22-8ea0acec8e54","Type":"ContainerStarted","Data":"61ac6586f23495e05d82cf8c7fc2b7aa035b9a30049a230032ce193ef85182df"} Jan 23 08:10:17 crc kubenswrapper[4982]: I0123 08:10:17.824006 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-thkb9" podStartSLOduration=3.896169503 podStartE2EDuration="5.823976417s" podCreationTimestamp="2026-01-23 08:10:12 +0000 UTC" firstStartedPulling="2026-01-23 08:10:14.711871438 +0000 UTC m=+889.192234824" lastFinishedPulling="2026-01-23 08:10:16.639678362 +0000 UTC m=+891.120041738" observedRunningTime="2026-01-23 08:10:17.82073155 +0000 UTC m=+892.301094966" watchObservedRunningTime="2026-01-23 08:10:17.823976417 +0000 UTC m=+892.304339803" Jan 23 08:10:20 crc kubenswrapper[4982]: I0123 08:10:20.808818 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerID="232c375b82edb6942dcdbe4ccd63a4aeeb0bd2cb58b6c7b58089c6c858f41a77" exitCode=0 Jan 23 08:10:20 crc kubenswrapper[4982]: I0123 08:10:20.808896 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" event={"ID":"6f10ca80-1061-44bf-933f-068fd39b2e3d","Type":"ContainerDied","Data":"232c375b82edb6942dcdbe4ccd63a4aeeb0bd2cb58b6c7b58089c6c858f41a77"} Jan 23 08:10:21 crc kubenswrapper[4982]: I0123 08:10:21.820680 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerID="412f66ee83c3c12782e55b7eb648c05db0997136988001203a3dd60ba51712c6" exitCode=0 Jan 23 08:10:21 crc kubenswrapper[4982]: I0123 08:10:21.820715 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" event={"ID":"6f10ca80-1061-44bf-933f-068fd39b2e3d","Type":"ContainerDied","Data":"412f66ee83c3c12782e55b7eb648c05db0997136988001203a3dd60ba51712c6"} Jan 23 08:10:22 crc kubenswrapper[4982]: I0123 08:10:22.000851 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jj6qf" Jan 23 08:10:22 crc kubenswrapper[4982]: I0123 08:10:22.715897 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-dj7gd" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.330755 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.331085 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.391083 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.494328 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.603841 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-util\") pod \"6f10ca80-1061-44bf-933f-068fd39b2e3d\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.603931 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-bundle\") pod \"6f10ca80-1061-44bf-933f-068fd39b2e3d\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.603970 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwbq\" (UniqueName: \"kubernetes.io/projected/6f10ca80-1061-44bf-933f-068fd39b2e3d-kube-api-access-sfwbq\") pod \"6f10ca80-1061-44bf-933f-068fd39b2e3d\" (UID: \"6f10ca80-1061-44bf-933f-068fd39b2e3d\") " Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.605058 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-bundle" (OuterVolumeSpecName: "bundle") pod "6f10ca80-1061-44bf-933f-068fd39b2e3d" (UID: "6f10ca80-1061-44bf-933f-068fd39b2e3d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.610524 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f10ca80-1061-44bf-933f-068fd39b2e3d-kube-api-access-sfwbq" (OuterVolumeSpecName: "kube-api-access-sfwbq") pod "6f10ca80-1061-44bf-933f-068fd39b2e3d" (UID: "6f10ca80-1061-44bf-933f-068fd39b2e3d"). InnerVolumeSpecName "kube-api-access-sfwbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.614495 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-util" (OuterVolumeSpecName: "util") pod "6f10ca80-1061-44bf-933f-068fd39b2e3d" (UID: "6f10ca80-1061-44bf-933f-068fd39b2e3d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.705793 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-util\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.705838 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f10ca80-1061-44bf-933f-068fd39b2e3d-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.705851 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfwbq\" (UniqueName: \"kubernetes.io/projected/6f10ca80-1061-44bf-933f-068fd39b2e3d-kube-api-access-sfwbq\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.864564 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.864601 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw" event={"ID":"6f10ca80-1061-44bf-933f-068fd39b2e3d","Type":"ContainerDied","Data":"5202fd5d7198f406681279215ec2709b30815b33d0e162b42ce2e50ccf3f80c4"} Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.864679 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5202fd5d7198f406681279215ec2709b30815b33d0e162b42ce2e50ccf3f80c4" Jan 23 08:10:23 crc kubenswrapper[4982]: I0123 08:10:23.987253 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:27 crc kubenswrapper[4982]: I0123 08:10:27.002607 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-thkb9"] Jan 23 08:10:27 crc kubenswrapper[4982]: I0123 08:10:27.003066 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-thkb9" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="registry-server" containerID="cri-o://61ac6586f23495e05d82cf8c7fc2b7aa035b9a30049a230032ce193ef85182df" gracePeriod=2 Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.250757 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq"] Jan 23 08:10:28 crc kubenswrapper[4982]: E0123 08:10:28.251328 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="util" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.251341 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="util" Jan 23 08:10:28 crc kubenswrapper[4982]: E0123 08:10:28.251359 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="extract" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.251365 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="extract" Jan 23 08:10:28 crc kubenswrapper[4982]: E0123 08:10:28.251379 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="pull" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.251386 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="pull" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.251553 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f10ca80-1061-44bf-933f-068fd39b2e3d" containerName="extract" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.252098 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.256233 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.256336 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-ckb6b" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.259734 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.272008 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq"] Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.384893 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkn24\" (UniqueName: \"kubernetes.io/projected/2cc207a7-4fa1-4b7a-8103-e6d088ba9137-kube-api-access-rkn24\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vfbhq\" (UID: \"2cc207a7-4fa1-4b7a-8103-e6d088ba9137\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.385304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc207a7-4fa1-4b7a-8103-e6d088ba9137-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vfbhq\" (UID: \"2cc207a7-4fa1-4b7a-8103-e6d088ba9137\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.487085 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc207a7-4fa1-4b7a-8103-e6d088ba9137-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vfbhq\" (UID: \"2cc207a7-4fa1-4b7a-8103-e6d088ba9137\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.487171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkn24\" (UniqueName: \"kubernetes.io/projected/2cc207a7-4fa1-4b7a-8103-e6d088ba9137-kube-api-access-rkn24\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vfbhq\" (UID: \"2cc207a7-4fa1-4b7a-8103-e6d088ba9137\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.487562 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2cc207a7-4fa1-4b7a-8103-e6d088ba9137-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vfbhq\" (UID: \"2cc207a7-4fa1-4b7a-8103-e6d088ba9137\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.512791 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkn24\" (UniqueName: \"kubernetes.io/projected/2cc207a7-4fa1-4b7a-8103-e6d088ba9137-kube-api-access-rkn24\") pod \"cert-manager-operator-controller-manager-64cf6dff88-vfbhq\" (UID: \"2cc207a7-4fa1-4b7a-8103-e6d088ba9137\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.572940 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.905349 4982 generic.go:334] "Generic (PLEG): container finished" podID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerID="61ac6586f23495e05d82cf8c7fc2b7aa035b9a30049a230032ce193ef85182df" exitCode=0 Jan 23 08:10:28 crc kubenswrapper[4982]: I0123 08:10:28.905436 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thkb9" event={"ID":"8373dc6a-45df-4fd7-9c22-8ea0acec8e54","Type":"ContainerDied","Data":"61ac6586f23495e05d82cf8c7fc2b7aa035b9a30049a230032ce193ef85182df"} Jan 23 08:10:29 crc kubenswrapper[4982]: I0123 08:10:29.053153 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq"] Jan 23 08:10:29 crc kubenswrapper[4982]: W0123 08:10:29.059182 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cc207a7_4fa1_4b7a_8103_e6d088ba9137.slice/crio-511a6a6e7ef468c2e406d32be7a8cec32b7f0522017da38cce829d45a1207b7a WatchSource:0}: Error finding container 511a6a6e7ef468c2e406d32be7a8cec32b7f0522017da38cce829d45a1207b7a: Status 404 returned error can't find the container with id 511a6a6e7ef468c2e406d32be7a8cec32b7f0522017da38cce829d45a1207b7a Jan 23 08:10:29 crc kubenswrapper[4982]: I0123 08:10:29.913803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" event={"ID":"2cc207a7-4fa1-4b7a-8103-e6d088ba9137","Type":"ContainerStarted","Data":"511a6a6e7ef468c2e406d32be7a8cec32b7f0522017da38cce829d45a1207b7a"} Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.640453 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.740119 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvp88\" (UniqueName: \"kubernetes.io/projected/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-kube-api-access-vvp88\") pod \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.740270 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-catalog-content\") pod \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.740424 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-utilities\") pod \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\" (UID: \"8373dc6a-45df-4fd7-9c22-8ea0acec8e54\") " Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.741868 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-utilities" (OuterVolumeSpecName: "utilities") pod "8373dc6a-45df-4fd7-9c22-8ea0acec8e54" (UID: "8373dc6a-45df-4fd7-9c22-8ea0acec8e54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.747594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-kube-api-access-vvp88" (OuterVolumeSpecName: "kube-api-access-vvp88") pod "8373dc6a-45df-4fd7-9c22-8ea0acec8e54" (UID: "8373dc6a-45df-4fd7-9c22-8ea0acec8e54"). InnerVolumeSpecName "kube-api-access-vvp88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.804576 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8373dc6a-45df-4fd7-9c22-8ea0acec8e54" (UID: "8373dc6a-45df-4fd7-9c22-8ea0acec8e54"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.842249 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.842333 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvp88\" (UniqueName: \"kubernetes.io/projected/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-kube-api-access-vvp88\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.842348 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8373dc6a-45df-4fd7-9c22-8ea0acec8e54-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.931552 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-thkb9" event={"ID":"8373dc6a-45df-4fd7-9c22-8ea0acec8e54","Type":"ContainerDied","Data":"3f963c52737319ec316ca329e6283d90b05d70454d93a03a814110e05fc95e79"} Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.931666 4982 scope.go:117] "RemoveContainer" containerID="61ac6586f23495e05d82cf8c7fc2b7aa035b9a30049a230032ce193ef85182df" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.931968 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-thkb9" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.991372 4982 scope.go:117] "RemoveContainer" containerID="8bd732675351f2e87edc916ee7863f1761fb66279e5d82bdaa02c06d021998ba" Jan 23 08:10:30 crc kubenswrapper[4982]: I0123 08:10:30.993615 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-thkb9"] Jan 23 08:10:31 crc kubenswrapper[4982]: I0123 08:10:31.001090 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-thkb9"] Jan 23 08:10:31 crc kubenswrapper[4982]: I0123 08:10:31.030739 4982 scope.go:117] "RemoveContainer" containerID="ffcaaadfbda9a40d2c20626f7e34fb5e4681f82dab8219e5e61c39d43e8e7195" Jan 23 08:10:31 crc kubenswrapper[4982]: I0123 08:10:31.837409 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" path="/var/lib/kubelet/pods/8373dc6a-45df-4fd7-9c22-8ea0acec8e54/volumes" Jan 23 08:10:31 crc kubenswrapper[4982]: I0123 08:10:31.983556 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kd7nd" Jan 23 08:10:39 crc kubenswrapper[4982]: I0123 08:10:39.009499 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" event={"ID":"2cc207a7-4fa1-4b7a-8103-e6d088ba9137","Type":"ContainerStarted","Data":"a431b00d967a02907c7751a9d89da6f45863adc14c18043232e0f60c4e4e2ddf"} Jan 23 08:10:39 crc kubenswrapper[4982]: I0123 08:10:39.042498 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-vfbhq" podStartSLOduration=1.653097042 podStartE2EDuration="11.042480155s" podCreationTimestamp="2026-01-23 08:10:28 +0000 UTC" firstStartedPulling="2026-01-23 08:10:29.062911691 +0000 UTC m=+903.543275077" lastFinishedPulling="2026-01-23 08:10:38.452294804 +0000 UTC m=+912.932658190" 
observedRunningTime="2026-01-23 08:10:39.025117581 +0000 UTC m=+913.505480997" watchObservedRunningTime="2026-01-23 08:10:39.042480155 +0000 UTC m=+913.522843541" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.401230 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g7rhm"] Jan 23 08:10:42 crc kubenswrapper[4982]: E0123 08:10:42.402246 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="extract-content" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.402266 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="extract-content" Jan 23 08:10:42 crc kubenswrapper[4982]: E0123 08:10:42.402293 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="registry-server" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.402300 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="registry-server" Jan 23 08:10:42 crc kubenswrapper[4982]: E0123 08:10:42.402325 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="extract-utilities" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.402333 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="extract-utilities" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.402500 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8373dc6a-45df-4fd7-9c22-8ea0acec8e54" containerName="registry-server" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.403702 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.421051 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7rhm"] Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.493300 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfp8x\" (UniqueName: \"kubernetes.io/projected/8e94b834-cea9-4788-9505-35685d666af5-kube-api-access-pfp8x\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.493425 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-catalog-content\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.493456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-utilities\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.594918 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-catalog-content\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.594961 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-utilities\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.595023 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfp8x\" (UniqueName: \"kubernetes.io/projected/8e94b834-cea9-4788-9505-35685d666af5-kube-api-access-pfp8x\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.595429 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-catalog-content\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.595477 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-utilities\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.616392 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pfp8x\" (UniqueName: \"kubernetes.io/projected/8e94b834-cea9-4788-9505-35685d666af5-kube-api-access-pfp8x\") pod \"certified-operators-g7rhm\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:42 crc kubenswrapper[4982]: I0123 08:10:42.730492 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:10:43 crc kubenswrapper[4982]: I0123 08:10:43.306056 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7rhm"] Jan 23 08:10:44 crc kubenswrapper[4982]: I0123 08:10:44.044787 4982 generic.go:334] "Generic (PLEG): container finished" podID="8e94b834-cea9-4788-9505-35685d666af5" containerID="335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed" exitCode=0 Jan 23 08:10:44 crc kubenswrapper[4982]: I0123 08:10:44.044938 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerDied","Data":"335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed"} Jan 23 08:10:44 crc kubenswrapper[4982]: I0123 08:10:44.045169 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerStarted","Data":"ad7402c9f96c28711b75160fc1b88d716f1208515a25d3db2ec3f17510279177"} Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.206436 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-ss8hx"] Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.207384 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.210995 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.211077 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bjl28" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.211251 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.221865 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-ss8hx"] Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.344530 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65f6cb92-3c6d-4e3c-b7af-734426f97f89-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-ss8hx\" (UID: \"65f6cb92-3c6d-4e3c-b7af-734426f97f89\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.344618 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftk6v\" (UniqueName: \"kubernetes.io/projected/65f6cb92-3c6d-4e3c-b7af-734426f97f89-kube-api-access-ftk6v\") pod \"cert-manager-webhook-f4fb5df64-ss8hx\" (UID: \"65f6cb92-3c6d-4e3c-b7af-734426f97f89\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.445892 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65f6cb92-3c6d-4e3c-b7af-734426f97f89-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-ss8hx\" (UID: \"65f6cb92-3c6d-4e3c-b7af-734426f97f89\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.445957 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftk6v\" (UniqueName: \"kubernetes.io/projected/65f6cb92-3c6d-4e3c-b7af-734426f97f89-kube-api-access-ftk6v\") pod \"cert-manager-webhook-f4fb5df64-ss8hx\" (UID: \"65f6cb92-3c6d-4e3c-b7af-734426f97f89\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.464957 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/65f6cb92-3c6d-4e3c-b7af-734426f97f89-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-ss8hx\" (UID: \"65f6cb92-3c6d-4e3c-b7af-734426f97f89\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.467591 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftk6v\" (UniqueName: \"kubernetes.io/projected/65f6cb92-3c6d-4e3c-b7af-734426f97f89-kube-api-access-ftk6v\") pod \"cert-manager-webhook-f4fb5df64-ss8hx\" (UID: \"65f6cb92-3c6d-4e3c-b7af-734426f97f89\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.531093 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.754285 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4"] Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.756141 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.759372 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-jjwvt" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.767144 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4"] Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.954803 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7wzp4\" (UID: \"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:45 crc kubenswrapper[4982]: I0123 08:10:45.955125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpcbk\" (UniqueName: \"kubernetes.io/projected/d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a-kube-api-access-zpcbk\") pod \"cert-manager-cainjector-855d9ccff4-7wzp4\" (UID: \"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.031278 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-ss8hx"] Jan 23 08:10:46 crc kubenswrapper[4982]: W0123 08:10:46.033757 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f6cb92_3c6d_4e3c_b7af_734426f97f89.slice/crio-37e3eddc360c47626dec81794f18cf6539f00ce007099080556b46147eef55f6 WatchSource:0}: Error finding container 37e3eddc360c47626dec81794f18cf6539f00ce007099080556b46147eef55f6: Status 404 returned error can't find the container with id 37e3eddc360c47626dec81794f18cf6539f00ce007099080556b46147eef55f6 Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.056341 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7wzp4\" (UID: \"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.056391 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpcbk\" (UniqueName: \"kubernetes.io/projected/d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a-kube-api-access-zpcbk\") pod \"cert-manager-cainjector-855d9ccff4-7wzp4\" (UID: \"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.065819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" 
event={"ID":"65f6cb92-3c6d-4e3c-b7af-734426f97f89","Type":"ContainerStarted","Data":"37e3eddc360c47626dec81794f18cf6539f00ce007099080556b46147eef55f6"} Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.074076 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpcbk\" (UniqueName: \"kubernetes.io/projected/d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a-kube-api-access-zpcbk\") pod \"cert-manager-cainjector-855d9ccff4-7wzp4\" (UID: \"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.077860 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7wzp4\" (UID: \"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.088655 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" Jan 23 08:10:46 crc kubenswrapper[4982]: I0123 08:10:46.316122 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4"] Jan 23 08:10:46 crc kubenswrapper[4982]: W0123 08:10:46.325191 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd98fe1e0_fe9c_4bc5_8de4_eed0d7181d6a.slice/crio-bbebb2009e907c8292682460c0347bf957e99c13eba1d6d4531135cc197c6fd5 WatchSource:0}: Error finding container bbebb2009e907c8292682460c0347bf957e99c13eba1d6d4531135cc197c6fd5: Status 404 returned error can't find the container with id bbebb2009e907c8292682460c0347bf957e99c13eba1d6d4531135cc197c6fd5 Jan 23 08:10:47 crc kubenswrapper[4982]: I0123 08:10:47.081359 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" event={"ID":"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a","Type":"ContainerStarted","Data":"bbebb2009e907c8292682460c0347bf957e99c13eba1d6d4531135cc197c6fd5"} Jan 23 08:10:49 crc kubenswrapper[4982]: I0123 08:10:49.099438 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerStarted","Data":"6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76"} Jan 23 08:10:50 crc kubenswrapper[4982]: I0123 08:10:50.112957 4982 generic.go:334] "Generic (PLEG): container finished" podID="8e94b834-cea9-4788-9505-35685d666af5" containerID="6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76" exitCode=0 Jan 23 08:10:50 crc kubenswrapper[4982]: I0123 08:10:50.113018 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerDied","Data":"6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76"} Jan 23 08:11:00 crc kubenswrapper[4982]: E0123 08:11:00.926749 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 23 08:11:00 crc kubenswrapper[4982]: E0123 
08:11:00.927351 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftk6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-ss8hx_cert-manager(65f6cb92-3c6d-4e3c-b7af-734426f97f89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:11:00 crc kubenswrapper[4982]: E0123 08:11:00.928541 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" Jan 23 08:11:00 crc kubenswrapper[4982]: E0123 08:11:00.936736 4982 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 23 08:11:00 crc kubenswrapper[4982]: E0123 08:11:00.937127 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system --v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zpcbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-855d9ccff4-7wzp4_cert-manager(d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 08:11:00 crc kubenswrapper[4982]: E0123 08:11:00.938395 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" podUID="d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a" Jan 23 08:11:01 crc kubenswrapper[4982]: I0123 08:11:01.194872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerStarted","Data":"c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07"} Jan 23 08:11:01 crc kubenswrapper[4982]: E0123 08:11:01.196524 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" 
podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" Jan 23 08:11:01 crc kubenswrapper[4982]: E0123 08:11:01.196583 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" podUID="d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a" Jan 23 08:11:01 crc kubenswrapper[4982]: I0123 08:11:01.232174 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g7rhm" podStartSLOduration=2.331845781 podStartE2EDuration="19.232158621s" podCreationTimestamp="2026-01-23 08:10:42 +0000 UTC" firstStartedPulling="2026-01-23 08:10:44.046461881 +0000 UTC m=+918.526825267" lastFinishedPulling="2026-01-23 08:11:00.946774721 +0000 UTC m=+935.427138107" observedRunningTime="2026-01-23 08:11:01.228159445 +0000 UTC m=+935.708522831" watchObservedRunningTime="2026-01-23 08:11:01.232158621 +0000 UTC m=+935.712522007" Jan 23 08:11:01 crc kubenswrapper[4982]: I0123 08:11:01.968254 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-2m4pf"] Jan 23 08:11:01 crc kubenswrapper[4982]: I0123 08:11:01.970043 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:01 crc kubenswrapper[4982]: I0123 08:11:01.976099 4982 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-wmvvr" Jan 23 08:11:01 crc kubenswrapper[4982]: I0123 08:11:01.980738 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-2m4pf"] Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:01.999611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/082e5308-ee9a-4c17-a4a6-637a6e17b684-bound-sa-token\") pod \"cert-manager-86cb77c54b-2m4pf\" (UID: \"082e5308-ee9a-4c17-a4a6-637a6e17b684\") " pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.001194 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cv8t\" (UniqueName: \"kubernetes.io/projected/082e5308-ee9a-4c17-a4a6-637a6e17b684-kube-api-access-8cv8t\") pod \"cert-manager-86cb77c54b-2m4pf\" (UID: \"082e5308-ee9a-4c17-a4a6-637a6e17b684\") " pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.102932 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/082e5308-ee9a-4c17-a4a6-637a6e17b684-bound-sa-token\") pod \"cert-manager-86cb77c54b-2m4pf\" (UID: \"082e5308-ee9a-4c17-a4a6-637a6e17b684\") " pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.103084 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cv8t\" (UniqueName: \"kubernetes.io/projected/082e5308-ee9a-4c17-a4a6-637a6e17b684-kube-api-access-8cv8t\") pod \"cert-manager-86cb77c54b-2m4pf\" (UID: \"082e5308-ee9a-4c17-a4a6-637a6e17b684\") " pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 
08:11:02.130597 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cv8t\" (UniqueName: \"kubernetes.io/projected/082e5308-ee9a-4c17-a4a6-637a6e17b684-kube-api-access-8cv8t\") pod \"cert-manager-86cb77c54b-2m4pf\" (UID: \"082e5308-ee9a-4c17-a4a6-637a6e17b684\") " pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.130795 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/082e5308-ee9a-4c17-a4a6-637a6e17b684-bound-sa-token\") pod \"cert-manager-86cb77c54b-2m4pf\" (UID: \"082e5308-ee9a-4c17-a4a6-637a6e17b684\") " pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.345056 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-2m4pf" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.732838 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.733061 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.782155 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:11:02 crc kubenswrapper[4982]: I0123 08:11:02.802151 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-2m4pf"] Jan 23 08:11:02 crc kubenswrapper[4982]: W0123 08:11:02.810974 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod082e5308_ee9a_4c17_a4a6_637a6e17b684.slice/crio-43e1b952943ade2be5f57e91ad6bf5a8ed2c913f9a9a9fd74ef3f8824bbc9a6d WatchSource:0}: Error finding container 43e1b952943ade2be5f57e91ad6bf5a8ed2c913f9a9a9fd74ef3f8824bbc9a6d: Status 404 returned error can't find the container with id 43e1b952943ade2be5f57e91ad6bf5a8ed2c913f9a9a9fd74ef3f8824bbc9a6d Jan 23 08:11:03 crc kubenswrapper[4982]: I0123 08:11:03.212972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-2m4pf" event={"ID":"082e5308-ee9a-4c17-a4a6-637a6e17b684","Type":"ContainerStarted","Data":"43e1b952943ade2be5f57e91ad6bf5a8ed2c913f9a9a9fd74ef3f8824bbc9a6d"} Jan 23 08:11:04 crc kubenswrapper[4982]: I0123 08:11:04.220230 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-2m4pf" event={"ID":"082e5308-ee9a-4c17-a4a6-637a6e17b684","Type":"ContainerStarted","Data":"26aae1f4ff332a026667c75c93e6ecfcb8d48678aee94b914990e03bcaaf1863"} Jan 23 08:11:04 crc kubenswrapper[4982]: I0123 08:11:04.242920 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-2m4pf" podStartSLOduration=2.447306752 podStartE2EDuration="3.242897704s" podCreationTimestamp="2026-01-23 08:11:01 +0000 UTC" firstStartedPulling="2026-01-23 08:11:02.815815596 +0000 UTC m=+937.296178982" lastFinishedPulling="2026-01-23 08:11:03.611406558 +0000 UTC m=+938.091769934" observedRunningTime="2026-01-23 08:11:04.235170857 +0000 UTC m=+938.715534243" watchObservedRunningTime="2026-01-23 08:11:04.242897704 +0000 UTC m=+938.723261130" Jan 23 08:11:12 crc kubenswrapper[4982]: I0123 08:11:12.789484 4982 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:11:14 crc kubenswrapper[4982]: I0123 08:11:14.780838 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7rhm"] Jan 23 08:11:14 crc kubenswrapper[4982]: I0123 08:11:14.781340 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g7rhm" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="registry-server" containerID="cri-o://c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07" gracePeriod=2 Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.184070 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.301333 4982 generic.go:334] "Generic (PLEG): container finished" podID="8e94b834-cea9-4788-9505-35685d666af5" containerID="c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07" exitCode=0 Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.301440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerDied","Data":"c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07"} Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.301473 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7rhm" event={"ID":"8e94b834-cea9-4788-9505-35685d666af5","Type":"ContainerDied","Data":"ad7402c9f96c28711b75160fc1b88d716f1208515a25d3db2ec3f17510279177"} Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.301495 4982 scope.go:117] "RemoveContainer" containerID="c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.301493 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7rhm" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.303342 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" event={"ID":"65f6cb92-3c6d-4e3c-b7af-734426f97f89","Type":"ContainerStarted","Data":"dec01aca13b7a244c5bea166e194f909879c00d5dbe2f404805df45e3d117b66"} Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.303485 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.303931 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfp8x\" (UniqueName: \"kubernetes.io/projected/8e94b834-cea9-4788-9505-35685d666af5-kube-api-access-pfp8x\") pod \"8e94b834-cea9-4788-9505-35685d666af5\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.303997 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-utilities\") pod \"8e94b834-cea9-4788-9505-35685d666af5\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.304423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-catalog-content\") pod \"8e94b834-cea9-4788-9505-35685d666af5\" (UID: \"8e94b834-cea9-4788-9505-35685d666af5\") " Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.304877 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-utilities" (OuterVolumeSpecName: "utilities") pod "8e94b834-cea9-4788-9505-35685d666af5" (UID: "8e94b834-cea9-4788-9505-35685d666af5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.305218 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" event={"ID":"d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a","Type":"ContainerStarted","Data":"2cbc35a3d5105808c094c7900b389762061a4966e1df97d6fbc6e3101c878fac"} Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.311278 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e94b834-cea9-4788-9505-35685d666af5-kube-api-access-pfp8x" (OuterVolumeSpecName: "kube-api-access-pfp8x") pod "8e94b834-cea9-4788-9505-35685d666af5" (UID: "8e94b834-cea9-4788-9505-35685d666af5"). InnerVolumeSpecName "kube-api-access-pfp8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.331863 4982 scope.go:117] "RemoveContainer" containerID="6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.339222 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7wzp4" podStartSLOduration=-9223372006.515575 podStartE2EDuration="30.339199515s" podCreationTimestamp="2026-01-23 08:10:45 +0000 UTC" firstStartedPulling="2026-01-23 08:10:46.327803 +0000 UTC m=+920.808166386" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:11:15.332021333 +0000 UTC m=+949.812384719" watchObservedRunningTime="2026-01-23 08:11:15.339199515 +0000 UTC m=+949.819562931" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.367184 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podStartSLOduration=-9223372006.487633 podStartE2EDuration="30.367142332s" podCreationTimestamp="2026-01-23 08:10:45 +0000 UTC" firstStartedPulling="2026-01-23 08:10:46.037779176 +0000 UTC m=+920.518142562" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:11:15.352985674 +0000 UTC m=+949.833349070" watchObservedRunningTime="2026-01-23 08:11:15.367142332 +0000 UTC m=+949.847505718" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.372420 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e94b834-cea9-4788-9505-35685d666af5" (UID: "8e94b834-cea9-4788-9505-35685d666af5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.381278 4982 scope.go:117] "RemoveContainer" containerID="335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.400768 4982 scope.go:117] "RemoveContainer" containerID="c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07" Jan 23 08:11:15 crc kubenswrapper[4982]: E0123 08:11:15.401099 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07\": container with ID starting with c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07 not found: ID does not exist" containerID="c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.401130 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07"} err="failed to get container status \"c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07\": rpc error: code = NotFound desc = could not find container \"c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07\": container with ID starting with c74b1adf96819d56601cfe5b4f789b34f734e4fce6dc71340683ac4ca4a18d07 not found: ID does not exist" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.401152 4982 scope.go:117] "RemoveContainer" containerID="6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76" Jan 23 08:11:15 crc kubenswrapper[4982]: E0123 08:11:15.401617 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76\": container with ID starting with 6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76 not found: ID does not exist" containerID="6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.401701 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76"} err="failed to get container status \"6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76\": rpc error: code = NotFound desc = could not find container \"6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76\": container with ID starting with 6839c66f177a14a5c8a65ca0ba26245d851e60ebc2ac05b3a11c29369a094f76 not found: ID does not exist" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.401735 4982 scope.go:117] "RemoveContainer" containerID="335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed" Jan 23 08:11:15 crc kubenswrapper[4982]: E0123 08:11:15.402029 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed\": container with ID starting with 335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed not found: ID does not exist" containerID="335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.402053 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed"} err="failed to get container status \"335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed\": rpc error: code = NotFound desc = could not find container \"335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed\": container with ID starting with 335fc8caca3149871e23d75a63c1de71d49a30e52fbcdb42be6aa67ab235caed not found: ID does not exist" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.405985 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.406011 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfp8x\" (UniqueName: \"kubernetes.io/projected/8e94b834-cea9-4788-9505-35685d666af5-kube-api-access-pfp8x\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.406023 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e94b834-cea9-4788-9505-35685d666af5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.641551 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7rhm"] Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.648934 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g7rhm"] Jan 23 08:11:15 crc kubenswrapper[4982]: I0123 08:11:15.836084 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e94b834-cea9-4788-9505-35685d666af5" path="/var/lib/kubelet/pods/8e94b834-cea9-4788-9505-35685d666af5/volumes" Jan 23 08:11:20 crc kubenswrapper[4982]: I0123 08:11:20.535236 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.395529 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f77kp"] Jan 23 08:11:27 crc kubenswrapper[4982]: E0123 08:11:27.395859 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="registry-server" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.395877 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="registry-server" Jan 23 08:11:27 crc kubenswrapper[4982]: E0123 08:11:27.395896 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="extract-content" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.395904 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="extract-content" Jan 23 08:11:27 crc kubenswrapper[4982]: E0123 08:11:27.395928 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="extract-utilities" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.395936 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="extract-utilities" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.396134 4982 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8e94b834-cea9-4788-9505-35685d666af5" containerName="registry-server" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.396718 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.399724 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.400187 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.409743 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-klqvq" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.417427 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f77kp"] Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.597101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8nz\" (UniqueName: \"kubernetes.io/projected/2259c9fb-de4b-4bea-a29b-18796e8cad74-kube-api-access-7x8nz\") pod \"openstack-operator-index-f77kp\" (UID: \"2259c9fb-de4b-4bea-a29b-18796e8cad74\") " pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.699315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x8nz\" (UniqueName: \"kubernetes.io/projected/2259c9fb-de4b-4bea-a29b-18796e8cad74-kube-api-access-7x8nz\") pod \"openstack-operator-index-f77kp\" (UID: \"2259c9fb-de4b-4bea-a29b-18796e8cad74\") " pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:27 crc kubenswrapper[4982]: I0123 08:11:27.721698 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8nz\" (UniqueName: \"kubernetes.io/projected/2259c9fb-de4b-4bea-a29b-18796e8cad74-kube-api-access-7x8nz\") pod \"openstack-operator-index-f77kp\" (UID: \"2259c9fb-de4b-4bea-a29b-18796e8cad74\") " pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:28 crc kubenswrapper[4982]: I0123 08:11:28.017424 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:28 crc kubenswrapper[4982]: I0123 08:11:28.237047 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f77kp"] Jan 23 08:11:28 crc kubenswrapper[4982]: I0123 08:11:28.447846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f77kp" event={"ID":"2259c9fb-de4b-4bea-a29b-18796e8cad74","Type":"ContainerStarted","Data":"af27485a08dc687f016be8a1601eac78c8c5224525ea3f59078223c9a5d652ca"} Jan 23 08:11:29 crc kubenswrapper[4982]: I0123 08:11:29.460568 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f77kp" event={"ID":"2259c9fb-de4b-4bea-a29b-18796e8cad74","Type":"ContainerStarted","Data":"1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d"} Jan 23 08:11:29 crc kubenswrapper[4982]: I0123 08:11:29.478291 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f77kp" podStartSLOduration=1.680746492 podStartE2EDuration="2.478271467s" podCreationTimestamp="2026-01-23 08:11:27 +0000 UTC" firstStartedPulling="2026-01-23 08:11:28.244733525 +0000 UTC m=+962.725096911" lastFinishedPulling="2026-01-23 08:11:29.0422585 +0000 UTC m=+963.522621886" observedRunningTime="2026-01-23 08:11:29.475663017 +0000 UTC m=+963.956026573" watchObservedRunningTime="2026-01-23 08:11:29.478271467 +0000 UTC m=+963.958634873" Jan 23 08:11:32 crc kubenswrapper[4982]: I0123 08:11:32.605820 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f77kp"] Jan 23 08:11:32 crc kubenswrapper[4982]: I0123 08:11:32.606769 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-f77kp" podUID="2259c9fb-de4b-4bea-a29b-18796e8cad74" containerName="registry-server" containerID="cri-o://1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d" gracePeriod=2 Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.033083 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.191914 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x8nz\" (UniqueName: \"kubernetes.io/projected/2259c9fb-de4b-4bea-a29b-18796e8cad74-kube-api-access-7x8nz\") pod \"2259c9fb-de4b-4bea-a29b-18796e8cad74\" (UID: \"2259c9fb-de4b-4bea-a29b-18796e8cad74\") " Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.198815 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2259c9fb-de4b-4bea-a29b-18796e8cad74-kube-api-access-7x8nz" (OuterVolumeSpecName: "kube-api-access-7x8nz") pod "2259c9fb-de4b-4bea-a29b-18796e8cad74" (UID: "2259c9fb-de4b-4bea-a29b-18796e8cad74"). InnerVolumeSpecName "kube-api-access-7x8nz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.203867 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s7lls"] Jan 23 08:11:33 crc kubenswrapper[4982]: E0123 08:11:33.204571 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2259c9fb-de4b-4bea-a29b-18796e8cad74" containerName="registry-server" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.204596 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2259c9fb-de4b-4bea-a29b-18796e8cad74" containerName="registry-server" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.205016 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2259c9fb-de4b-4bea-a29b-18796e8cad74" containerName="registry-server" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.205866 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.217909 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s7lls"] Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.294138 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x8nz\" (UniqueName: \"kubernetes.io/projected/2259c9fb-de4b-4bea-a29b-18796e8cad74-kube-api-access-7x8nz\") on node \"crc\" DevicePath \"\"" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.395703 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwrb6\" (UniqueName: \"kubernetes.io/projected/b74d47b4-62e8-4dd7-b89f-4a8177287c3a-kube-api-access-bwrb6\") pod \"openstack-operator-index-s7lls\" (UID: \"b74d47b4-62e8-4dd7-b89f-4a8177287c3a\") " pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.493812 4982 generic.go:334] "Generic (PLEG): container finished" podID="2259c9fb-de4b-4bea-a29b-18796e8cad74" containerID="1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d" exitCode=0 Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.493859 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f77kp" event={"ID":"2259c9fb-de4b-4bea-a29b-18796e8cad74","Type":"ContainerDied","Data":"1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d"} Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.493896 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f77kp" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.493919 4982 scope.go:117] "RemoveContainer" containerID="1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.493906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f77kp" event={"ID":"2259c9fb-de4b-4bea-a29b-18796e8cad74","Type":"ContainerDied","Data":"af27485a08dc687f016be8a1601eac78c8c5224525ea3f59078223c9a5d652ca"} Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.497160 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwrb6\" (UniqueName: \"kubernetes.io/projected/b74d47b4-62e8-4dd7-b89f-4a8177287c3a-kube-api-access-bwrb6\") pod \"openstack-operator-index-s7lls\" (UID: \"b74d47b4-62e8-4dd7-b89f-4a8177287c3a\") " pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.521742 4982 scope.go:117] "RemoveContainer" containerID="1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d" Jan 23 08:11:33 crc kubenswrapper[4982]: E0123 08:11:33.522855 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d\": container with ID starting with 1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d not found: ID does not exist" containerID="1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.522891 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d"} err="failed to get container status \"1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d\": rpc error: code = NotFound desc = could not find container \"1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d\": container with ID starting with 1aaf161f15028d2e427b0dda84e86edcfa598fc794713579ddc5d5d48546f32d not found: ID does not exist" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.532975 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-f77kp"] Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.534143 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwrb6\" (UniqueName: \"kubernetes.io/projected/b74d47b4-62e8-4dd7-b89f-4a8177287c3a-kube-api-access-bwrb6\") pod \"openstack-operator-index-s7lls\" (UID: \"b74d47b4-62e8-4dd7-b89f-4a8177287c3a\") " pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.540530 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-f77kp"] Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.551460 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.792942 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s7lls"] Jan 23 08:11:33 crc kubenswrapper[4982]: I0123 08:11:33.855926 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2259c9fb-de4b-4bea-a29b-18796e8cad74" path="/var/lib/kubelet/pods/2259c9fb-de4b-4bea-a29b-18796e8cad74/volumes" Jan 23 08:11:34 crc kubenswrapper[4982]: I0123 08:11:34.502049 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7lls" event={"ID":"b74d47b4-62e8-4dd7-b89f-4a8177287c3a","Type":"ContainerStarted","Data":"bc5e13fe954dfa6e4316b0efdca35c3501512448532b3e7622dfe29d9849d106"} Jan 23 08:11:34 crc kubenswrapper[4982]: I0123 08:11:34.502119 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s7lls" event={"ID":"b74d47b4-62e8-4dd7-b89f-4a8177287c3a","Type":"ContainerStarted","Data":"ef329e9afc761b4e58a8b4cd08c6f2c466afbbba7151a0d223bd96338e56b45e"} Jan 23 08:11:34 crc kubenswrapper[4982]: I0123 08:11:34.517372 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-s7lls" podStartSLOduration=1.060816305 podStartE2EDuration="1.517351493s" podCreationTimestamp="2026-01-23 08:11:33 +0000 UTC" firstStartedPulling="2026-01-23 08:11:33.817788097 +0000 UTC m=+968.298151483" lastFinishedPulling="2026-01-23 08:11:34.274323275 +0000 UTC m=+968.754686671" observedRunningTime="2026-01-23 08:11:34.513397697 +0000 UTC m=+968.993761083" watchObservedRunningTime="2026-01-23 08:11:34.517351493 +0000 UTC m=+968.997714879" Jan 23 08:11:41 crc kubenswrapper[4982]: I0123 08:11:41.436265 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:11:41 crc kubenswrapper[4982]: I0123 08:11:41.437242 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:11:43 crc kubenswrapper[4982]: I0123 08:11:43.551744 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:43 crc kubenswrapper[4982]: I0123 08:11:43.552037 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:43 crc kubenswrapper[4982]: I0123 08:11:43.585078 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:43 crc kubenswrapper[4982]: I0123 08:11:43.612664 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-s7lls" Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.439118 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"] Jan 23 08:11:57 crc 
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.445500 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tf66g"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.451373 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"]
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.552862 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-util\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.552914 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-bundle\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.553134 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tls2w\" (UniqueName: \"kubernetes.io/projected/524eccad-4e68-4e0c-84e9-bc292ad9eb25-kube-api-access-tls2w\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.653771 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-util\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.653826 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-bundle\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.653879 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tls2w\" (UniqueName: \"kubernetes.io/projected/524eccad-4e68-4e0c-84e9-bc292ad9eb25-kube-api-access-tls2w\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.654484 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-util\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.654678 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-bundle\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.675144 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tls2w\" (UniqueName: \"kubernetes.io/projected/524eccad-4e68-4e0c-84e9-bc292ad9eb25-kube-api-access-tls2w\") pod \"97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") " pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:57 crc kubenswrapper[4982]: I0123 08:11:57.775081 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:11:58 crc kubenswrapper[4982]: I0123 08:11:58.191806 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"]
Jan 23 08:11:58 crc kubenswrapper[4982]: I0123 08:11:58.674055 4982 generic.go:334] "Generic (PLEG): container finished" podID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerID="2a460740e13a08b8c5b050c5164da34623f341738c5e505fef9c6b4cf4ce6a43" exitCode=0
Jan 23 08:11:58 crc kubenswrapper[4982]: I0123 08:11:58.674196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp" event={"ID":"524eccad-4e68-4e0c-84e9-bc292ad9eb25","Type":"ContainerDied","Data":"2a460740e13a08b8c5b050c5164da34623f341738c5e505fef9c6b4cf4ce6a43"}
Jan 23 08:11:58 crc kubenswrapper[4982]: I0123 08:11:58.674525 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp" event={"ID":"524eccad-4e68-4e0c-84e9-bc292ad9eb25","Type":"ContainerStarted","Data":"8a122fe9305f6ec428b4a5dc412d98b0c1e6f556c749e5115151c192de2d5134"}
Jan 23 08:12:00 crc kubenswrapper[4982]: I0123 08:12:00.690789 4982 generic.go:334] "Generic (PLEG): container finished" podID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerID="ac5d7ab05a3e776b23ce5901a262a7f87247367c47030c9b0fb009df8f124790" exitCode=0
Jan 23 08:12:00 crc kubenswrapper[4982]: I0123 08:12:00.690899 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp" event={"ID":"524eccad-4e68-4e0c-84e9-bc292ad9eb25","Type":"ContainerDied","Data":"ac5d7ab05a3e776b23ce5901a262a7f87247367c47030c9b0fb009df8f124790"}
Jan 23 08:12:01 crc kubenswrapper[4982]: I0123 08:12:01.703796 4982 generic.go:334] "Generic (PLEG): container finished" podID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerID="695b95d4b45dca59f6346af0bba3eaa09b9a3633260530a5d93c5ffb9e688cd8" exitCode=0
Jan 23 08:12:01 crc kubenswrapper[4982]: I0123 08:12:01.703878 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp" event={"ID":"524eccad-4e68-4e0c-84e9-bc292ad9eb25","Type":"ContainerDied","Data":"695b95d4b45dca59f6346af0bba3eaa09b9a3633260530a5d93c5ffb9e688cd8"}
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.021427 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.135560 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-bundle\") pod \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") "
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.135642 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-util\") pod \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") "
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.135758 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tls2w\" (UniqueName: \"kubernetes.io/projected/524eccad-4e68-4e0c-84e9-bc292ad9eb25-kube-api-access-tls2w\") pod \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\" (UID: \"524eccad-4e68-4e0c-84e9-bc292ad9eb25\") "
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.138404 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-bundle" (OuterVolumeSpecName: "bundle") pod "524eccad-4e68-4e0c-84e9-bc292ad9eb25" (UID: "524eccad-4e68-4e0c-84e9-bc292ad9eb25"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.143928 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524eccad-4e68-4e0c-84e9-bc292ad9eb25-kube-api-access-tls2w" (OuterVolumeSpecName: "kube-api-access-tls2w") pod "524eccad-4e68-4e0c-84e9-bc292ad9eb25" (UID: "524eccad-4e68-4e0c-84e9-bc292ad9eb25"). InnerVolumeSpecName "kube-api-access-tls2w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.165043 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-util" (OuterVolumeSpecName: "util") pod "524eccad-4e68-4e0c-84e9-bc292ad9eb25" (UID: "524eccad-4e68-4e0c-84e9-bc292ad9eb25"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.237261 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tls2w\" (UniqueName: \"kubernetes.io/projected/524eccad-4e68-4e0c-84e9-bc292ad9eb25-kube-api-access-tls2w\") on node \"crc\" DevicePath \"\""
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.237313 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.237322 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/524eccad-4e68-4e0c-84e9-bc292ad9eb25-util\") on node \"crc\" DevicePath \"\""
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.720408 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp" event={"ID":"524eccad-4e68-4e0c-84e9-bc292ad9eb25","Type":"ContainerDied","Data":"8a122fe9305f6ec428b4a5dc412d98b0c1e6f556c749e5115151c192de2d5134"}
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.720816 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a122fe9305f6ec428b4a5dc412d98b0c1e6f556c749e5115151c192de2d5134"
Jan 23 08:12:03 crc kubenswrapper[4982]: I0123 08:12:03.720537 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp"
Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.423329 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d"]
Jan 23 08:12:05 crc kubenswrapper[4982]: E0123 08:12:05.428071 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="pull"
Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.428265 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="pull"
Jan 23 08:12:05 crc kubenswrapper[4982]: E0123 08:12:05.428437 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="extract"
Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.428452 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="extract"
Jan 23 08:12:05 crc kubenswrapper[4982]: E0123 08:12:05.428480 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="util"
Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.428490 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="util"
Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.429044 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="524eccad-4e68-4e0c-84e9-bc292ad9eb25" containerName="extract"
Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.429879 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d"
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.433541 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6xsqz" Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.463749 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d"] Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.473697 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdcv\" (UniqueName: \"kubernetes.io/projected/b5cf4a40-effb-4d2e-8d9b-a7818151f189-kube-api-access-8zdcv\") pod \"openstack-operator-controller-init-85bfd44c94-wq88d\" (UID: \"b5cf4a40-effb-4d2e-8d9b-a7818151f189\") " pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.574710 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdcv\" (UniqueName: \"kubernetes.io/projected/b5cf4a40-effb-4d2e-8d9b-a7818151f189-kube-api-access-8zdcv\") pod \"openstack-operator-controller-init-85bfd44c94-wq88d\" (UID: \"b5cf4a40-effb-4d2e-8d9b-a7818151f189\") " pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.591496 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdcv\" (UniqueName: \"kubernetes.io/projected/b5cf4a40-effb-4d2e-8d9b-a7818151f189-kube-api-access-8zdcv\") pod \"openstack-operator-controller-init-85bfd44c94-wq88d\" (UID: \"b5cf4a40-effb-4d2e-8d9b-a7818151f189\") " pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:05 crc kubenswrapper[4982]: I0123 08:12:05.763036 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:06 crc kubenswrapper[4982]: I0123 08:12:06.245943 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d"] Jan 23 08:12:06 crc kubenswrapper[4982]: I0123 08:12:06.741342 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" event={"ID":"b5cf4a40-effb-4d2e-8d9b-a7818151f189","Type":"ContainerStarted","Data":"e4801744ed373994ad99d5d0fe92dfed8245a7c657cca99bb3a331ec2c2b339e"} Jan 23 08:12:11 crc kubenswrapper[4982]: I0123 08:12:11.436199 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:12:11 crc kubenswrapper[4982]: I0123 08:12:11.437080 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:12:12 crc kubenswrapper[4982]: I0123 08:12:12.800690 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" event={"ID":"b5cf4a40-effb-4d2e-8d9b-a7818151f189","Type":"ContainerStarted","Data":"e103d7ff88ac2055ccfe8a666fcca4f34daf708cbd3a86ecadc8a19e5a64765e"} Jan 23 08:12:12 crc kubenswrapper[4982]: I0123 08:12:12.801034 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:12 crc kubenswrapper[4982]: I0123 08:12:12.831646 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" podStartSLOduration=1.8575453849999999 podStartE2EDuration="7.831612131s" podCreationTimestamp="2026-01-23 08:12:05 +0000 UTC" firstStartedPulling="2026-01-23 08:12:06.235064262 +0000 UTC m=+1000.715427648" lastFinishedPulling="2026-01-23 08:12:12.209131008 +0000 UTC m=+1006.689494394" observedRunningTime="2026-01-23 08:12:12.827315636 +0000 UTC m=+1007.307679032" watchObservedRunningTime="2026-01-23 08:12:12.831612131 +0000 UTC m=+1007.311975517" Jan 23 08:12:25 crc kubenswrapper[4982]: I0123 08:12:25.765617 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 08:12:41 crc kubenswrapper[4982]: I0123 08:12:41.435711 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:12:41 crc kubenswrapper[4982]: I0123 08:12:41.436267 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 23 08:12:41 crc kubenswrapper[4982]: I0123 08:12:41.436312 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:12:41 crc kubenswrapper[4982]: I0123 08:12:41.436956 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d24a630527fc6e0922fa5b0c623c16897e3532c564083acc123c1c83ab59cae"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:12:41 crc kubenswrapper[4982]: I0123 08:12:41.437007 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://1d24a630527fc6e0922fa5b0c623c16897e3532c564083acc123c1c83ab59cae" gracePeriod=600 Jan 23 08:12:43 crc kubenswrapper[4982]: I0123 08:12:43.035140 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="1d24a630527fc6e0922fa5b0c623c16897e3532c564083acc123c1c83ab59cae" exitCode=0 Jan 23 08:12:43 crc kubenswrapper[4982]: I0123 08:12:43.035842 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"1d24a630527fc6e0922fa5b0c623c16897e3532c564083acc123c1c83ab59cae"} Jan 23 08:12:43 crc kubenswrapper[4982]: I0123 08:12:43.035873 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"c77a76c13f380fffd54942065cc632450e7687109faf0320bb3cc074932836d4"} Jan 23 08:12:43 crc kubenswrapper[4982]: I0123 08:12:43.035892 4982 scope.go:117] "RemoveContainer" containerID="35ca4511cd3232156669a737407761628df1a5436b3552a0057dc7fc63d6e9b8" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.127606 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.131659 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.138281 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.139408 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zcc2k" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.146504 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.152241 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mpl29" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.159087 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.170524 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.171579 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.178927 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-l2j72" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.194406 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.197177 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477tm\" (UniqueName: \"kubernetes.io/projected/0167eb4d-3772-402e-8451-69789e17dd5e-kube-api-access-477tm\") pod \"barbican-operator-controller-manager-59dd8b7cbf-jhcmh\" (UID: \"0167eb4d-3772-402e-8451-69789e17dd5e\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.205344 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.206503 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.210311 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4stb4" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.228698 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.232807 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.255780 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.257145 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.265350 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.267201 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vzc77" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.271937 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.273352 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.276345 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xrcb7" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.293936 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.295198 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.298733 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9fx\" (UniqueName: \"kubernetes.io/projected/1be00a16-06c6-4fe5-9626-149c854680f8-kube-api-access-vj9fx\") pod \"cinder-operator-controller-manager-69cf5d4557-8rr6c\" (UID: \"1be00a16-06c6-4fe5-9626-149c854680f8\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.298785 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg479\" (UniqueName: \"kubernetes.io/projected/a00a88ad-d184-4394-9231-4da810afad02-kube-api-access-wg479\") pod \"designate-operator-controller-manager-b45d7bf98-pm6mp\" (UID: \"a00a88ad-d184-4394-9231-4da810afad02\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.298810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-477tm\" (UniqueName: \"kubernetes.io/projected/0167eb4d-3772-402e-8451-69789e17dd5e-kube-api-access-477tm\") pod \"barbican-operator-controller-manager-59dd8b7cbf-jhcmh\" (UID: \"0167eb4d-3772-402e-8451-69789e17dd5e\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.298898 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq7kw\" (UniqueName: \"kubernetes.io/projected/d6169561-3b43-4dcd-8fa6-f08d484bccdf-kube-api-access-xq7kw\") pod \"glance-operator-controller-manager-78fdd796fd-glq8j\" (UID: \"d6169561-3b43-4dcd-8fa6-f08d484bccdf\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.305065 4982 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.305305 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-82krc" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.329771 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.330719 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.333030 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p5bj7" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.346371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477tm\" (UniqueName: \"kubernetes.io/projected/0167eb4d-3772-402e-8451-69789e17dd5e-kube-api-access-477tm\") pod \"barbican-operator-controller-manager-59dd8b7cbf-jhcmh\" (UID: \"0167eb4d-3772-402e-8451-69789e17dd5e\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.350680 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.373589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.386389 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.399078 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.400806 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402307 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402364 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq7kw\" (UniqueName: \"kubernetes.io/projected/d6169561-3b43-4dcd-8fa6-f08d484bccdf-kube-api-access-xq7kw\") pod \"glance-operator-controller-manager-78fdd796fd-glq8j\" (UID: \"d6169561-3b43-4dcd-8fa6-f08d484bccdf\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402399 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg479\" (UniqueName: \"kubernetes.io/projected/5656681c-2a30-47fa-bb61-f33660c36db0-kube-api-access-kg479\") pod \"heat-operator-controller-manager-594c8c9d5d-gvwjl\" (UID: \"5656681c-2a30-47fa-bb61-f33660c36db0\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9fx\" (UniqueName: \"kubernetes.io/projected/1be00a16-06c6-4fe5-9626-149c854680f8-kube-api-access-vj9fx\") pod \"cinder-operator-controller-manager-69cf5d4557-8rr6c\" (UID: \"1be00a16-06c6-4fe5-9626-149c854680f8\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402449 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg479\" (UniqueName: \"kubernetes.io/projected/a00a88ad-d184-4394-9231-4da810afad02-kube-api-access-wg479\") pod \"designate-operator-controller-manager-b45d7bf98-pm6mp\" (UID: \"a00a88ad-d184-4394-9231-4da810afad02\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402472 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzww7\" (UniqueName: \"kubernetes.io/projected/5285fcd9-54a1-4499-8930-af660f95709a-kube-api-access-hzww7\") pod \"horizon-operator-controller-manager-77d5c5b54f-44mmv\" (UID: \"5285fcd9-54a1-4499-8930-af660f95709a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402518 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lchx\" (UniqueName: \"kubernetes.io/projected/801f503b-36aa-441d-9487-8fedc3365d46-kube-api-access-8lchx\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.402541 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl6cd\" 
(UniqueName: \"kubernetes.io/projected/854d661c-b4ec-4046-85d3-3e643336d93c-kube-api-access-dl6cd\") pod \"ironic-operator-controller-manager-69d6c9f5b8-tt4pp\" (UID: \"854d661c-b4ec-4046-85d3-3e643336d93c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.403161 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-4dv79" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.435963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq7kw\" (UniqueName: \"kubernetes.io/projected/d6169561-3b43-4dcd-8fa6-f08d484bccdf-kube-api-access-xq7kw\") pod \"glance-operator-controller-manager-78fdd796fd-glq8j\" (UID: \"d6169561-3b43-4dcd-8fa6-f08d484bccdf\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.439197 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg479\" (UniqueName: \"kubernetes.io/projected/a00a88ad-d184-4394-9231-4da810afad02-kube-api-access-wg479\") pod \"designate-operator-controller-manager-b45d7bf98-pm6mp\" (UID: \"a00a88ad-d184-4394-9231-4da810afad02\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.442284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj9fx\" (UniqueName: \"kubernetes.io/projected/1be00a16-06c6-4fe5-9626-149c854680f8-kube-api-access-vj9fx\") pod \"cinder-operator-controller-manager-69cf5d4557-8rr6c\" (UID: \"1be00a16-06c6-4fe5-9626-149c854680f8\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.459356 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.460449 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.485687 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.486745 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.488072 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.495016 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4nhsj" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.499787 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.507841 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg479\" (UniqueName: \"kubernetes.io/projected/5656681c-2a30-47fa-bb61-f33660c36db0-kube-api-access-kg479\") pod \"heat-operator-controller-manager-594c8c9d5d-gvwjl\" (UID: \"5656681c-2a30-47fa-bb61-f33660c36db0\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.507915 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntvfk\" (UniqueName: \"kubernetes.io/projected/ae1bdd5c-8116-47e9-9344-f1d84794541c-kube-api-access-ntvfk\") pod \"keystone-operator-controller-manager-b8b6d4659-4nz5s\" (UID: \"ae1bdd5c-8116-47e9-9344-f1d84794541c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.507968 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzww7\" (UniqueName: \"kubernetes.io/projected/5285fcd9-54a1-4499-8930-af660f95709a-kube-api-access-hzww7\") pod \"horizon-operator-controller-manager-77d5c5b54f-44mmv\" (UID: \"5285fcd9-54a1-4499-8930-af660f95709a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.508066 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lchx\" (UniqueName: \"kubernetes.io/projected/801f503b-36aa-441d-9487-8fedc3365d46-kube-api-access-8lchx\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.508096 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl6cd\" (UniqueName: \"kubernetes.io/projected/854d661c-b4ec-4046-85d3-3e643336d93c-kube-api-access-dl6cd\") pod \"ironic-operator-controller-manager-69d6c9f5b8-tt4pp\" (UID: \"854d661c-b4ec-4046-85d3-3e643336d93c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.508164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:47 crc kubenswrapper[4982]: E0123 08:12:47.508308 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:47 crc kubenswrapper[4982]: E0123 08:12:47.508361 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert podName:801f503b-36aa-441d-9487-8fedc3365d46 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:48.008344475 +0000 UTC m=+1042.488707851 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert") pod "infra-operator-controller-manager-54ccf4f85d-gvxjc" (UID: "801f503b-36aa-441d-9487-8fedc3365d46") : secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.530950 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.542640 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.543734 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.549654 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gwlvl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.560969 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl6cd\" (UniqueName: \"kubernetes.io/projected/854d661c-b4ec-4046-85d3-3e643336d93c-kube-api-access-dl6cd\") pod \"ironic-operator-controller-manager-69d6c9f5b8-tt4pp\" (UID: \"854d661c-b4ec-4046-85d3-3e643336d93c\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.561489 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg479\" (UniqueName: \"kubernetes.io/projected/5656681c-2a30-47fa-bb61-f33660c36db0-kube-api-access-kg479\") pod \"heat-operator-controller-manager-594c8c9d5d-gvwjl\" (UID: \"5656681c-2a30-47fa-bb61-f33660c36db0\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.572250 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lchx\" (UniqueName: \"kubernetes.io/projected/801f503b-36aa-441d-9487-8fedc3365d46-kube-api-access-8lchx\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.572259 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzww7\" (UniqueName: \"kubernetes.io/projected/5285fcd9-54a1-4499-8930-af660f95709a-kube-api-access-hzww7\") pod \"horizon-operator-controller-manager-77d5c5b54f-44mmv\" (UID: \"5285fcd9-54a1-4499-8930-af660f95709a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.588425 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.590703 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.595738 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.601841 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.609496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gscc\" (UniqueName: \"kubernetes.io/projected/6f86042f-a06a-4756-96ae-19de64621eb4-kube-api-access-9gscc\") pod \"manila-operator-controller-manager-78c6999f6f-hfrk2\" (UID: \"6f86042f-a06a-4756-96ae-19de64621eb4\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.609555 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntvfk\" (UniqueName: \"kubernetes.io/projected/ae1bdd5c-8116-47e9-9344-f1d84794541c-kube-api-access-ntvfk\") pod \"keystone-operator-controller-manager-b8b6d4659-4nz5s\" (UID: \"ae1bdd5c-8116-47e9-9344-f1d84794541c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.609587 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2rf\" (UniqueName: \"kubernetes.io/projected/cb115f06-d81c-4b67-af3f-454ffd70d358-kube-api-access-xt2rf\") pod \"mariadb-operator-controller-manager-c87fff755-lcpcl\" (UID: \"cb115f06-d81c-4b67-af3f-454ffd70d358\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.613661 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.614710 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.616934 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mmwtj" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.622541 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.634304 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.657321 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.658437 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qfffj" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.659594 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.661401 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-lwmpx" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.663187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntvfk\" (UniqueName: \"kubernetes.io/projected/ae1bdd5c-8116-47e9-9344-f1d84794541c-kube-api-access-ntvfk\") pod \"keystone-operator-controller-manager-b8b6d4659-4nz5s\" (UID: \"ae1bdd5c-8116-47e9-9344-f1d84794541c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.707912 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.708332 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.712032 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ndc7\" (UniqueName: \"kubernetes.io/projected/9695f150-036e-4e4b-a857-7af97f1576e5-kube-api-access-2ndc7\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwfjx\" (UID: \"9695f150-036e-4e4b-a857-7af97f1576e5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.712147 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gscc\" (UniqueName: \"kubernetes.io/projected/6f86042f-a06a-4756-96ae-19de64621eb4-kube-api-access-9gscc\") pod \"manila-operator-controller-manager-78c6999f6f-hfrk2\" (UID: \"6f86042f-a06a-4756-96ae-19de64621eb4\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.712188 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2rf\" (UniqueName: \"kubernetes.io/projected/cb115f06-d81c-4b67-af3f-454ffd70d358-kube-api-access-xt2rf\") pod \"mariadb-operator-controller-manager-c87fff755-lcpcl\" (UID: \"cb115f06-d81c-4b67-af3f-454ffd70d358\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.712216 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brrnf\" (UniqueName: \"kubernetes.io/projected/a73a3899-9dfc-404d-b7cd-7eaf32de406a-kube-api-access-brrnf\") pod \"neutron-operator-controller-manager-5d8f59fb49-5xrcb\" (UID: \"a73a3899-9dfc-404d-b7cd-7eaf32de406a\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.738161 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2rf\" (UniqueName: \"kubernetes.io/projected/cb115f06-d81c-4b67-af3f-454ffd70d358-kube-api-access-xt2rf\") pod \"mariadb-operator-controller-manager-c87fff755-lcpcl\" (UID: \"cb115f06-d81c-4b67-af3f-454ffd70d358\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 
08:12:47.742189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gscc\" (UniqueName: \"kubernetes.io/projected/6f86042f-a06a-4756-96ae-19de64621eb4-kube-api-access-9gscc\") pod \"manila-operator-controller-manager-78c6999f6f-hfrk2\" (UID: \"6f86042f-a06a-4756-96ae-19de64621eb4\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.764220 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.782268 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.803076 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.822397 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ndc7\" (UniqueName: \"kubernetes.io/projected/9695f150-036e-4e4b-a857-7af97f1576e5-kube-api-access-2ndc7\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwfjx\" (UID: \"9695f150-036e-4e4b-a857-7af97f1576e5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.822488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvb2f\" (UniqueName: \"kubernetes.io/projected/efce0465-d792-44d6-b0c6-48f3e74ccd7a-kube-api-access-kvb2f\") pod \"octavia-operator-controller-manager-7bd9774b6-q4hlv\" (UID: \"efce0465-d792-44d6-b0c6-48f3e74ccd7a\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.822581 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brrnf\" (UniqueName: \"kubernetes.io/projected/a73a3899-9dfc-404d-b7cd-7eaf32de406a-kube-api-access-brrnf\") pod \"neutron-operator-controller-manager-5d8f59fb49-5xrcb\" (UID: \"a73a3899-9dfc-404d-b7cd-7eaf32de406a\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.826959 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.828018 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.830042 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.872513 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2gl5v" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.875320 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.878520 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.878924 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.879061 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.879087 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.880165 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.881827 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-c2rpf" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.882075 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9gkxz" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.883401 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.885189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brrnf\" (UniqueName: \"kubernetes.io/projected/a73a3899-9dfc-404d-b7cd-7eaf32de406a-kube-api-access-brrnf\") pod \"neutron-operator-controller-manager-5d8f59fb49-5xrcb\" (UID: \"a73a3899-9dfc-404d-b7cd-7eaf32de406a\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.890087 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.890096 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ndc7\" (UniqueName: \"kubernetes.io/projected/9695f150-036e-4e4b-a857-7af97f1576e5-kube-api-access-2ndc7\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwfjx\" (UID: \"9695f150-036e-4e4b-a857-7af97f1576e5\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.910804 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.912033 4982 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.917357 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.919000 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.920444 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w9ltb" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.920889 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-whxzj" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.924024 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvb2f\" (UniqueName: \"kubernetes.io/projected/efce0465-d792-44d6-b0c6-48f3e74ccd7a-kube-api-access-kvb2f\") pod \"octavia-operator-controller-manager-7bd9774b6-q4hlv\" (UID: \"efce0465-d792-44d6-b0c6-48f3e74ccd7a\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.924071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.924160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vbl\" (UniqueName: \"kubernetes.io/projected/3d5f4f8b-84b1-45e2-b393-254c24c090a8-kube-api-access-f2vbl\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.946057 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89"] Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.977006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvb2f\" (UniqueName: \"kubernetes.io/projected/efce0465-d792-44d6-b0c6-48f3e74ccd7a-kube-api-access-kvb2f\") pod \"octavia-operator-controller-manager-7bd9774b6-q4hlv\" (UID: \"efce0465-d792-44d6-b0c6-48f3e74ccd7a\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 08:12:47 crc kubenswrapper[4982]: I0123 08:12:47.997123 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.008826 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025236 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2vbl\" (UniqueName: \"kubernetes.io/projected/3d5f4f8b-84b1-45e2-b393-254c24c090a8-kube-api-access-f2vbl\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025285 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76v4\" (UniqueName: \"kubernetes.io/projected/75a3ce99-813c-45c1-9b30-0bb0b47f26a6-kube-api-access-v76v4\") pod \"placement-operator-controller-manager-5d646b7d76-mwfhl\" (UID: \"75a3ce99-813c-45c1-9b30-0bb0b47f26a6\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025377 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025406 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025470 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx2s7\" (UniqueName: \"kubernetes.io/projected/91102982-af50-41cf-9969-28c92bdf8f36-kube-api-access-kx2s7\") pod \"ovn-operator-controller-manager-55db956ddc-r2wx4\" (UID: \"91102982-af50-41cf-9969-28c92bdf8f36\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025492 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbgc\" (UniqueName: \"kubernetes.io/projected/34541257-bf45-41a5-a756-393b32053416-kube-api-access-btbgc\") pod \"telemetry-operator-controller-manager-85cd9769bb-dddts\" (UID: \"34541257-bf45-41a5-a756-393b32053416\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.025517 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxfv\" (UniqueName: \"kubernetes.io/projected/e57a3249-8bfe-49da-88fa-dbc887422c3c-kube-api-access-9mxfv\") pod \"swift-operator-controller-manager-547cbdb99f-lqf89\" (UID: \"e57a3249-8bfe-49da-88fa-dbc887422c3c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.025881 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.025919 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert podName:801f503b-36aa-441d-9487-8fedc3365d46 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:49.025903523 +0000 UTC m=+1043.506266909 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert") pod "infra-operator-controller-manager-54ccf4f85d-gvxjc" (UID: "801f503b-36aa-441d-9487-8fedc3365d46") : secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.026156 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.026187 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert podName:3d5f4f8b-84b1-45e2-b393-254c24c090a8 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:48.526179531 +0000 UTC m=+1043.006542917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" (UID: "3d5f4f8b-84b1-45e2-b393-254c24c090a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.026218 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.027340 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.033490 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-znbnh" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.047191 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.048618 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.061845 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.062597 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2vbl\" (UniqueName: \"kubernetes.io/projected/3d5f4f8b-84b1-45e2-b393-254c24c090a8-kube-api-access-f2vbl\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.087353 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.127056 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx2s7\" (UniqueName: \"kubernetes.io/projected/91102982-af50-41cf-9969-28c92bdf8f36-kube-api-access-kx2s7\") pod \"ovn-operator-controller-manager-55db956ddc-r2wx4\" (UID: \"91102982-af50-41cf-9969-28c92bdf8f36\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.127110 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbgc\" (UniqueName: \"kubernetes.io/projected/34541257-bf45-41a5-a756-393b32053416-kube-api-access-btbgc\") pod \"telemetry-operator-controller-manager-85cd9769bb-dddts\" (UID: \"34541257-bf45-41a5-a756-393b32053416\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.127139 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mxfv\" (UniqueName: \"kubernetes.io/projected/e57a3249-8bfe-49da-88fa-dbc887422c3c-kube-api-access-9mxfv\") pod \"swift-operator-controller-manager-547cbdb99f-lqf89\" (UID: \"e57a3249-8bfe-49da-88fa-dbc887422c3c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.127189 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76v4\" (UniqueName: \"kubernetes.io/projected/75a3ce99-813c-45c1-9b30-0bb0b47f26a6-kube-api-access-v76v4\") pod \"placement-operator-controller-manager-5d646b7d76-mwfhl\" (UID: \"75a3ce99-813c-45c1-9b30-0bb0b47f26a6\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.127216 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g65r\" (UniqueName: \"kubernetes.io/projected/564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c-kube-api-access-8g65r\") pod \"test-operator-controller-manager-69797bbcbd-p6jkk\" (UID: \"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.129855 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.138045 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.142454 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hh2x9" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.146176 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx2s7\" (UniqueName: \"kubernetes.io/projected/91102982-af50-41cf-9969-28c92bdf8f36-kube-api-access-kx2s7\") pod \"ovn-operator-controller-manager-55db956ddc-r2wx4\" (UID: \"91102982-af50-41cf-9969-28c92bdf8f36\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.148170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mxfv\" (UniqueName: \"kubernetes.io/projected/e57a3249-8bfe-49da-88fa-dbc887422c3c-kube-api-access-9mxfv\") pod \"swift-operator-controller-manager-547cbdb99f-lqf89\" (UID: \"e57a3249-8bfe-49da-88fa-dbc887422c3c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.160448 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76v4\" (UniqueName: \"kubernetes.io/projected/75a3ce99-813c-45c1-9b30-0bb0b47f26a6-kube-api-access-v76v4\") pod \"placement-operator-controller-manager-5d646b7d76-mwfhl\" (UID: \"75a3ce99-813c-45c1-9b30-0bb0b47f26a6\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.161253 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btbgc\" (UniqueName: \"kubernetes.io/projected/34541257-bf45-41a5-a756-393b32053416-kube-api-access-btbgc\") pod \"telemetry-operator-controller-manager-85cd9769bb-dddts\" (UID: \"34541257-bf45-41a5-a756-393b32053416\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.178950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.228523 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g65r\" (UniqueName: \"kubernetes.io/projected/564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c-kube-api-access-8g65r\") pod \"test-operator-controller-manager-69797bbcbd-p6jkk\" (UID: \"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.228856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82tjn\" (UniqueName: \"kubernetes.io/projected/16b1155b-baa1-467b-947f-36673d41a1a0-kube-api-access-82tjn\") pod \"watcher-operator-controller-manager-5ffb9c6597-ptg2h\" (UID: \"16b1155b-baa1-467b-947f-36673d41a1a0\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.233438 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.251677 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g65r\" (UniqueName: \"kubernetes.io/projected/564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c-kube-api-access-8g65r\") pod \"test-operator-controller-manager-69797bbcbd-p6jkk\" (UID: \"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.283442 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.284570 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.288419 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.289507 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.293709 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-msw7t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.299278 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.331699 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.332117 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82tjn\" (UniqueName: \"kubernetes.io/projected/16b1155b-baa1-467b-947f-36673d41a1a0-kube-api-access-82tjn\") pod \"watcher-operator-controller-manager-5ffb9c6597-ptg2h\" (UID: \"16b1155b-baa1-467b-947f-36673d41a1a0\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.363855 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82tjn\" (UniqueName: \"kubernetes.io/projected/16b1155b-baa1-467b-947f-36673d41a1a0-kube-api-access-82tjn\") pod \"watcher-operator-controller-manager-5ffb9c6597-ptg2h\" (UID: \"16b1155b-baa1-467b-947f-36673d41a1a0\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.364858 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.370931 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.374205 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.376524 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-8kf7c" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.379950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.404122 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.411594 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.434045 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjnb\" (UniqueName: \"kubernetes.io/projected/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-kube-api-access-vgjnb\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.434178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.434319 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q87h4\" (UniqueName: \"kubernetes.io/projected/96c1e2e5-87b0-4341-83cd-5b623de95215-kube-api-access-q87h4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nc6w7\" (UID: \"96c1e2e5-87b0-4341-83cd-5b623de95215\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.434376 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.453976 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.491860 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.536009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.536395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q87h4\" (UniqueName: \"kubernetes.io/projected/96c1e2e5-87b0-4341-83cd-5b623de95215-kube-api-access-q87h4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nc6w7\" (UID: \"96c1e2e5-87b0-4341-83cd-5b623de95215\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.536437 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.536480 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjnb\" (UniqueName: \"kubernetes.io/projected/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-kube-api-access-vgjnb\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.536505 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.536728 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.536780 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert podName:3d5f4f8b-84b1-45e2-b393-254c24c090a8 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:49.536765474 +0000 UTC m=+1044.017128860 (durationBeforeRetry 1s). 
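The Error details that follow add two more missing secrets for the openstack-operator controller manager ("webhook-server-cert" and "metrics-server-cert") to the infra-operator and baremetal-operator webhook certs already failing above. All of these are TLS secrets that something else, typically cert-manager or the operator bundle, is expected to publish into the openstack-operators namespace; the mounts simply stay in backoff until the objects appear. As a purely hypothetical sketch of the shape of such an object via client-go (the name mirrors the log, but the kubeconfig path and certificate/key bytes are placeholders, and in a real cluster the issuing controller creates this, not a hand-written client):

```go
// Hypothetical illustration only: creating one of the TLS secrets the
// mounts above are waiting for. Placeholder data throughout.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "infra-operator-webhook-server-cert", // name from the log
			Namespace: "openstack-operators",
		},
		Type: corev1.SecretTypeTLS, // kubernetes.io/tls: requires tls.crt and tls.key
		Data: map[string][]byte{
			"tls.crt": []byte("-----BEGIN CERTIFICATE----- (placeholder)"),
			"tls.key": []byte("-----BEGIN PRIVATE KEY----- (placeholder)"),
		},
	}
	if _, err := client.CoreV1().Secrets(secret.Namespace).Create(
		context.Background(), secret, metav1.CreateOptions{},
	); err != nil {
		panic(err)
	}
}
```

Once the secret exists, the next scheduled retry of MountVolume.SetUp should succeed and the blocked pod can start its containers.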
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" (UID: "3d5f4f8b-84b1-45e2-b393-254c24c090a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.536784 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.536853 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:49.036835486 +0000 UTC m=+1043.517198872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "webhook-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.537170 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: E0123 08:12:48.537192 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:49.037185045 +0000 UTC m=+1043.517548431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "metrics-server-cert" not found Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.564751 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjnb\" (UniqueName: \"kubernetes.io/projected/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-kube-api-access-vgjnb\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.571278 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q87h4\" (UniqueName: \"kubernetes.io/projected/96c1e2e5-87b0-4341-83cd-5b623de95215-kube-api-access-q87h4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nc6w7\" (UID: \"96c1e2e5-87b0-4341-83cd-5b623de95215\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.589201 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.602653 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.759513 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j"] Jan 23 08:12:48 crc 
kubenswrapper[4982]: I0123 08:12:48.772806 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.780956 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl"] Jan 23 08:12:48 crc kubenswrapper[4982]: W0123 08:12:48.781402 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod854d661c_b4ec_4046_85d3_3e643336d93c.slice/crio-9d45bbb2c7cc0209da0e57039df8f0ad8ba8cfb27c1cb1c8beac28420c04b055 WatchSource:0}: Error finding container 9d45bbb2c7cc0209da0e57039df8f0ad8ba8cfb27c1cb1c8beac28420c04b055: Status 404 returned error can't find the container with id 9d45bbb2c7cc0209da0e57039df8f0ad8ba8cfb27c1cb1c8beac28420c04b055 Jan 23 08:12:48 crc kubenswrapper[4982]: W0123 08:12:48.795656 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5656681c_2a30_47fa_bb61_f33660c36db0.slice/crio-e1e966e6a086cf1447780a8e95d6a7783fbca4be0c693b6d7dee1a740cbf648a WatchSource:0}: Error finding container e1e966e6a086cf1447780a8e95d6a7783fbca4be0c693b6d7dee1a740cbf648a: Status 404 returned error can't find the container with id e1e966e6a086cf1447780a8e95d6a7783fbca4be0c693b6d7dee1a740cbf648a Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.823978 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.911185 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.925765 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp"] Jan 23 08:12:48 crc kubenswrapper[4982]: I0123 08:12:48.941418 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2"] Jan 23 08:12:48 crc kubenswrapper[4982]: W0123 08:12:48.947698 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f86042f_a06a_4756_96ae_19de64621eb4.slice/crio-778c8a280d47136a0c6d4923329b494cd3db2db7e2c3e88d90775237b87f1324 WatchSource:0}: Error finding container 778c8a280d47136a0c6d4923329b494cd3db2db7e2c3e88d90775237b87f1324: Status 404 returned error can't find the container with id 778c8a280d47136a0c6d4923329b494cd3db2db7e2c3e88d90775237b87f1324 Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.050986 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.051041 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: 
\"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.051103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.051207 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.051208 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.051263 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:50.051247141 +0000 UTC m=+1044.531610527 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "metrics-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.051277 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:50.051271872 +0000 UTC m=+1044.531635258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "webhook-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.051305 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.051407 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert podName:801f503b-36aa-441d-9487-8fedc3365d46 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:51.051384905 +0000 UTC m=+1045.531748291 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert") pod "infra-operator-controller-manager-54ccf4f85d-gvxjc" (UID: "801f503b-36aa-441d-9487-8fedc3365d46") : secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.108354 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.114806 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89"] Jan 23 08:12:49 crc kubenswrapper[4982]: W0123 08:12:49.115177 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode57a3249_8bfe_49da_88fa_dbc887422c3c.slice/crio-2d2c2febfb9623799d4cb2c04eb6e0ee993edf331f3427254b668636dac2ebad WatchSource:0}: Error finding container 2d2c2febfb9623799d4cb2c04eb6e0ee993edf331f3427254b668636dac2ebad: Status 404 returned error can't find the container with id 2d2c2febfb9623799d4cb2c04eb6e0ee993edf331f3427254b668636dac2ebad Jan 23 08:12:49 crc kubenswrapper[4982]: W0123 08:12:49.117325 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9695f150_036e_4e4b_a857_7af97f1576e5.slice/crio-56ed2d267d7152a57f276472ca5ddc0ca3d0515a9968dabdf1e1e94194fbd71b WatchSource:0}: Error finding container 56ed2d267d7152a57f276472ca5ddc0ca3d0515a9968dabdf1e1e94194fbd71b: Status 404 returned error can't find the container with id 56ed2d267d7152a57f276472ca5ddc0ca3d0515a9968dabdf1e1e94194fbd71b Jan 23 08:12:49 crc kubenswrapper[4982]: W0123 08:12:49.119679 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb115f06_d81c_4b67_af3f_454ffd70d358.slice/crio-ba6f675e9d988abbccfa53addfe3f8b4b721c51a779e05f6dcd733bcd3621792 WatchSource:0}: Error finding container ba6f675e9d988abbccfa53addfe3f8b4b721c51a779e05f6dcd733bcd3621792: Status 404 returned error can't find the container with id ba6f675e9d988abbccfa53addfe3f8b4b721c51a779e05f6dcd733bcd3621792 Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.121195 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.147962 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" event={"ID":"5656681c-2a30-47fa-bb61-f33660c36db0","Type":"ContainerStarted","Data":"e1e966e6a086cf1447780a8e95d6a7783fbca4be0c693b6d7dee1a740cbf648a"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.148789 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" event={"ID":"ae1bdd5c-8116-47e9-9344-f1d84794541c","Type":"ContainerStarted","Data":"cfa8399f46c011ec7e927d7158d5e57710150961a9315964eb204b301e474023"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.149775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" event={"ID":"e57a3249-8bfe-49da-88fa-dbc887422c3c","Type":"ContainerStarted","Data":"2d2c2febfb9623799d4cb2c04eb6e0ee993edf331f3427254b668636dac2ebad"} Jan 23 08:12:49 crc 
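The burst of manager.go warnings here ("Failed to process watch event ... Status 404") is the kubelet's embedded cAdvisor racing with CRI-O: a cgroup watch event fires for a just-created crio-... sandbox, but the follow-up container lookup returns 404 because the container is not registered yet (or is already gone), so the event is dropped. The surrounding PLEG "ContainerStarted" events show the same sandboxes coming up normally, which suggests these warnings are benign during a pod-creation burst like this one. A sketch of that tolerate-the-race handling follows (the helper names are invented for illustration, not cAdvisor's API):

```go
// Illustrative pattern: when a watch event can race with container
// creation/deletion, treat "not found" as skippable, not fatal.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("status 404: container not found") // stand-in

// inspectContainer is a stub for the lookup the log shows failing;
// here it always simulates the race.
func inspectContainer(id string) error {
	return errNotFound
}

func processWatchEvent(id string) error {
	if err := inspectContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			// Vanished (or not yet registered) between the cgroup event
			// and the lookup: warn and move on, as the kubelet does.
			fmt.Printf("skipping watch event for %s: %v\n", id, err)
			return nil
		}
		return err // anything else is a real failure
	}
	return nil
}

func main() {
	_ = processWatchEvent("crio-example-container-id")
}
```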
kubenswrapper[4982]: I0123 08:12:49.150907 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" event={"ID":"1be00a16-06c6-4fe5-9626-149c854680f8","Type":"ContainerStarted","Data":"568df2935e41bf1c67162cea8f6a92efd7539d916bfdb0508f13414203c5609e"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.152215 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" event={"ID":"6f86042f-a06a-4756-96ae-19de64621eb4","Type":"ContainerStarted","Data":"778c8a280d47136a0c6d4923329b494cd3db2db7e2c3e88d90775237b87f1324"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.153204 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" event={"ID":"5285fcd9-54a1-4499-8930-af660f95709a","Type":"ContainerStarted","Data":"07a794342626abc12edb1f7fe89bbad4d820f0119ea6f713d294386d0fb10fa7"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.154192 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" event={"ID":"854d661c-b4ec-4046-85d3-3e643336d93c","Type":"ContainerStarted","Data":"9d45bbb2c7cc0209da0e57039df8f0ad8ba8cfb27c1cb1c8beac28420c04b055"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.155108 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" event={"ID":"cb115f06-d81c-4b67-af3f-454ffd70d358","Type":"ContainerStarted","Data":"ba6f675e9d988abbccfa53addfe3f8b4b721c51a779e05f6dcd733bcd3621792"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.156746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" event={"ID":"0167eb4d-3772-402e-8451-69789e17dd5e","Type":"ContainerStarted","Data":"a7be20d7942f031501f3fb9aabf8bc51ddb1fee1180f202fa9f850affa60075c"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.162298 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" event={"ID":"d6169561-3b43-4dcd-8fa6-f08d484bccdf","Type":"ContainerStarted","Data":"9ede6ab5db9af124b6708765cbcbe434690399dda94479fe182e01b9c9001fa1"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.163375 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" event={"ID":"9695f150-036e-4e4b-a857-7af97f1576e5","Type":"ContainerStarted","Data":"56ed2d267d7152a57f276472ca5ddc0ca3d0515a9968dabdf1e1e94194fbd71b"} Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.164535 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" event={"ID":"a00a88ad-d184-4394-9231-4da810afad02","Type":"ContainerStarted","Data":"055efdd931d5797321034518025b76ab8424ff0a0aa7fc73744df13ce76119e6"} Jan 23 08:12:49 crc kubenswrapper[4982]: W0123 08:12:49.169415 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda73a3899_9dfc_404d_b7cd_7eaf32de406a.slice/crio-5073e89f07e3d46116ef8428fd637b22315659560402b243938b6c2978b742dc WatchSource:0}: Error finding container 5073e89f07e3d46116ef8428fd637b22315659560402b243938b6c2978b742dc: Status 404 returned 
error can't find the container with id 5073e89f07e3d46116ef8428fd637b22315659560402b243938b6c2978b742dc Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.174688 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.291769 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4"] Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.297917 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-dddts_openstack-operators(34541257-bf45-41a5-a756-393b32053416): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.299124 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" podUID="34541257-bf45-41a5-a756-393b32053416" Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.303202 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts"] Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.306410 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kx2s7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-r2wx4_openstack-operators(91102982-af50-41cf-9969-28c92bdf8f36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:12:49 crc kubenswrapper[4982]: W0123 08:12:49.307293 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16b1155b_baa1_467b_947f_36673d41a1a0.slice/crio-b2b328f463812b14287a61277fc9da41297f0c553b237b9c89e2055d462554ba WatchSource:0}: Error finding container b2b328f463812b14287a61277fc9da41297f0c553b237b9c89e2055d462554ba: Status 404 returned error can't find the container with id b2b328f463812b14287a61277fc9da41297f0c553b237b9c89e2055d462554ba Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.308385 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" podUID="91102982-af50-41cf-9969-28c92bdf8f36" Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 
08:12:49.310831 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv"] Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.311292 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82tjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-ptg2h_openstack-operators(16b1155b-baa1-467b-947f-36673d41a1a0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:12:49 crc kubenswrapper[4982]: W0123 08:12:49.312016 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a3ce99_813c_45c1_9b30_0bb0b47f26a6.slice/crio-375f30662f1dbab2bc44c4b10275b725a9e89f2565a8637c22336550ddd10b66 WatchSource:0}: Error finding container 375f30662f1dbab2bc44c4b10275b725a9e89f2565a8637c22336550ddd10b66: Status 404 returned error can't find the container with id 375f30662f1dbab2bc44c4b10275b725a9e89f2565a8637c22336550ddd10b66 Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.312285 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvb2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-q4hlv_openstack-operators(efce0465-d792-44d6-b0c6-48f3e74ccd7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.312743 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" podUID="16b1155b-baa1-467b-947f-36673d41a1a0" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.313383 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" podUID="efce0465-d792-44d6-b0c6-48f3e74ccd7a" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.313918 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v76v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-mwfhl_openstack-operators(75a3ce99-813c-45c1-9b30-0bb0b47f26a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.315292 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" podUID="75a3ce99-813c-45c1-9b30-0bb0b47f26a6" Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.317136 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.325128 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.331888 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.448472 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7"] Jan 23 08:12:49 crc kubenswrapper[4982]: I0123 08:12:49.559386 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.559543 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:49 crc kubenswrapper[4982]: E0123 08:12:49.559643 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert podName:3d5f4f8b-84b1-45e2-b393-254c24c090a8 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:51.559608756 +0000 UTC m=+1046.039972142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" (UID: "3d5f4f8b-84b1-45e2-b393-254c24c090a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.066942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.067347 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.067172 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.067671 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:52.067618253 +0000 UTC m=+1046.547981649 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "webhook-server-cert" not found Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.067558 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.068438 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:52.068411134 +0000 UTC m=+1046.548774590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "metrics-server-cert" not found Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.182279 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" event={"ID":"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c","Type":"ContainerStarted","Data":"2f6ed9c4f8f95e5472626b844dd04878649abe04b63d8ae67961c66a65490c7a"} Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.185440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" event={"ID":"34541257-bf45-41a5-a756-393b32053416","Type":"ContainerStarted","Data":"f224bf7db4c357312ee2d106c108d67f99f8828040e410854e6d14c78ce89d4f"} Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.189455 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" event={"ID":"16b1155b-baa1-467b-947f-36673d41a1a0","Type":"ContainerStarted","Data":"b2b328f463812b14287a61277fc9da41297f0c553b237b9c89e2055d462554ba"} Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.191788 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" podUID="34541257-bf45-41a5-a756-393b32053416" Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.215991 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" podUID="16b1155b-baa1-467b-947f-36673d41a1a0" Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.217956 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" event={"ID":"91102982-af50-41cf-9969-28c92bdf8f36","Type":"ContainerStarted","Data":"8483a794091b77bd177c6621e38cdbc710257ba15734d0893760cd0f1d84d0ea"} Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.223517 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" podUID="91102982-af50-41cf-9969-28c92bdf8f36" Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.239443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" event={"ID":"a73a3899-9dfc-404d-b7cd-7eaf32de406a","Type":"ContainerStarted","Data":"5073e89f07e3d46116ef8428fd637b22315659560402b243938b6c2978b742dc"} Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.242445 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" event={"ID":"75a3ce99-813c-45c1-9b30-0bb0b47f26a6","Type":"ContainerStarted","Data":"375f30662f1dbab2bc44c4b10275b725a9e89f2565a8637c22336550ddd10b66"} Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.245718 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" podUID="75a3ce99-813c-45c1-9b30-0bb0b47f26a6" Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.246640 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" event={"ID":"efce0465-d792-44d6-b0c6-48f3e74ccd7a","Type":"ContainerStarted","Data":"1215c937331f9516a85763a4ba6ab4c1a4e8bea15227124b28b1f6bfa3819c16"} Jan 23 08:12:50 crc kubenswrapper[4982]: I0123 08:12:50.247631 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" event={"ID":"96c1e2e5-87b0-4341-83cd-5b623de95215","Type":"ContainerStarted","Data":"646688dd2e327592bc88892d7143f5489e2351167b8aa2904ccadeb820b631f0"} Jan 23 08:12:50 crc kubenswrapper[4982]: E0123 08:12:50.249297 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" podUID="efce0465-d792-44d6-b0c6-48f3e74ccd7a" Jan 23 08:12:51 crc kubenswrapper[4982]: I0123 08:12:51.123664 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.123986 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.124046 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert podName:801f503b-36aa-441d-9487-8fedc3365d46 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:55.124027912 +0000 UTC m=+1049.604391298 (durationBeforeRetry 4s). 
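
The "ErrImagePull: pull QPS exceeded" failures above are not registry errors: the kubelet rejected those pulls client-side, before contacting quay.io, because its image-pull rate limiter ran dry. That limiter is a token bucket configured by the KubeletConfiguration fields registryPullQPS and registryBurst (upstream defaults 5 and 10), so several dozen operator pods requesting images at the same instant exhaust the burst and the remainder fail fast, to be retried later (the ImagePullBackOff entries that follow). A sketch of that token-bucket behaviour using golang.org/x/time/rate, assuming the default 5/10 values:

package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Assumed upstream kubelet defaults: registryPullQPS=5, registryBurst=10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// Roughly thirty operator images requested in the same instant, as when
	// all the controller-manager pods above start together.
	for i := 1; i <= 30; i++ {
		if limiter.Allow() {
			fmt.Printf("image %2d: pull admitted\n", i)
		} else {
			fmt.Printf("image %2d: ErrImagePull: pull QPS exceeded\n", i)
		}
	}
}

The first ten requests drain the burst; nearly everything after that is rejected, which matches how many operators here land in ImagePullBackOff at once.
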
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert") pod "infra-operator-controller-manager-54ccf4f85d-gvxjc" (UID: "801f503b-36aa-441d-9487-8fedc3365d46") : secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.265071 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" podUID="34541257-bf45-41a5-a756-393b32053416" Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.265335 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" podUID="efce0465-d792-44d6-b0c6-48f3e74ccd7a" Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.265864 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" podUID="16b1155b-baa1-467b-947f-36673d41a1a0" Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.265990 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" podUID="75a3ce99-813c-45c1-9b30-0bb0b47f26a6" Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.266211 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" podUID="91102982-af50-41cf-9969-28c92bdf8f36" Jan 23 08:12:51 crc kubenswrapper[4982]: I0123 08:12:51.635274 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.635597 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:51 crc kubenswrapper[4982]: E0123 08:12:51.635759 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert 
podName:3d5f4f8b-84b1-45e2-b393-254c24c090a8 nodeName:}" failed. No retries permitted until 2026-01-23 08:12:55.635685964 +0000 UTC m=+1050.116049350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" (UID: "3d5f4f8b-84b1-45e2-b393-254c24c090a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:52 crc kubenswrapper[4982]: I0123 08:12:52.143171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:52 crc kubenswrapper[4982]: I0123 08:12:52.143297 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:52 crc kubenswrapper[4982]: E0123 08:12:52.143446 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 08:12:52 crc kubenswrapper[4982]: E0123 08:12:52.143503 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:56.143485974 +0000 UTC m=+1050.623849360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "webhook-server-cert" not found Jan 23 08:12:52 crc kubenswrapper[4982]: E0123 08:12:52.143904 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 08:12:52 crc kubenswrapper[4982]: E0123 08:12:52.143936 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:12:56.143927746 +0000 UTC m=+1050.624291132 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "metrics-server-cert" not found Jan 23 08:12:55 crc kubenswrapper[4982]: I0123 08:12:55.202568 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:12:55 crc kubenswrapper[4982]: E0123 08:12:55.202757 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:55 crc kubenswrapper[4982]: E0123 08:12:55.203378 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert podName:801f503b-36aa-441d-9487-8fedc3365d46 nodeName:}" failed. No retries permitted until 2026-01-23 08:13:03.203354093 +0000 UTC m=+1057.683717489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert") pod "infra-operator-controller-manager-54ccf4f85d-gvxjc" (UID: "801f503b-36aa-441d-9487-8fedc3365d46") : secret "infra-operator-webhook-server-cert" not found Jan 23 08:12:55 crc kubenswrapper[4982]: I0123 08:12:55.711270 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:12:55 crc kubenswrapper[4982]: E0123 08:12:55.711484 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:55 crc kubenswrapper[4982]: E0123 08:12:55.711532 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert podName:3d5f4f8b-84b1-45e2-b393-254c24c090a8 nodeName:}" failed. No retries permitted until 2026-01-23 08:13:03.711518202 +0000 UTC m=+1058.191881588 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" (UID: "3d5f4f8b-84b1-45e2-b393-254c24c090a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:12:56 crc kubenswrapper[4982]: I0123 08:12:56.219861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:56 crc kubenswrapper[4982]: I0123 08:12:56.219991 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:12:56 crc kubenswrapper[4982]: E0123 08:12:56.220068 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 08:12:56 crc kubenswrapper[4982]: E0123 08:12:56.220154 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:13:04.220131174 +0000 UTC m=+1058.700494590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "webhook-server-cert" not found Jan 23 08:12:56 crc kubenswrapper[4982]: E0123 08:12:56.220176 4982 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 08:12:56 crc kubenswrapper[4982]: E0123 08:12:56.220214 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:13:04.220202346 +0000 UTC m=+1058.700565732 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "metrics-server-cert" not found Jan 23 08:13:01 crc kubenswrapper[4982]: I0123 08:13:01.829161 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:13:03 crc kubenswrapper[4982]: I0123 08:13:03.242911 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:13:03 crc kubenswrapper[4982]: E0123 08:13:03.243102 4982 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 08:13:03 crc kubenswrapper[4982]: E0123 08:13:03.243181 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert podName:801f503b-36aa-441d-9487-8fedc3365d46 nodeName:}" failed. No retries permitted until 2026-01-23 08:13:19.243163364 +0000 UTC m=+1073.723526750 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert") pod "infra-operator-controller-manager-54ccf4f85d-gvxjc" (UID: "801f503b-36aa-441d-9487-8fedc3365d46") : secret "infra-operator-webhook-server-cert" not found Jan 23 08:13:03 crc kubenswrapper[4982]: I0123 08:13:03.752430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:13:03 crc kubenswrapper[4982]: E0123 08:13:03.752615 4982 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:13:03 crc kubenswrapper[4982]: E0123 08:13:03.753035 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert podName:3d5f4f8b-84b1-45e2-b393-254c24c090a8 nodeName:}" failed. No retries permitted until 2026-01-23 08:13:19.753004598 +0000 UTC m=+1074.233367994 (durationBeforeRetry 16s). 
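
These cert mount failures resolve on their own: the webhook and metrics certificate Secrets are created asynchronously, and the kubelet keeps retrying the secret volume until they appear (the later "MountVolume.SetUp succeeded" entries show metrics-certs mounting at 08:13:04 and the infra-operator cert at 08:13:19). A minimal client-go sketch that waits for one of these Secrets the same way; the kubeconfig path is an assumption for the example, while the namespace and Secret name are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Namespace and Secret name taken from the log entries above.
	ns, name := "openstack-operators", "infra-operator-webhook-server-cert"
	for {
		_, err := client.CoreV1().Secrets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			fmt.Println("secret exists; the pod's cert volume can now be mounted")
			return
		}
		if !apierrors.IsNotFound(err) {
			panic(err) // a real API error, not just "not created yet"
		}
		fmt.Println("secret not found yet; retrying in 5s")
		time.Sleep(5 * time.Second)
	}
}
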
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert") pod "openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" (UID: "3d5f4f8b-84b1-45e2-b393-254c24c090a8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 08:13:04 crc kubenswrapper[4982]: I0123 08:13:04.262067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:13:04 crc kubenswrapper[4982]: I0123 08:13:04.262192 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:13:04 crc kubenswrapper[4982]: E0123 08:13:04.262238 4982 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 08:13:04 crc kubenswrapper[4982]: E0123 08:13:04.262331 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs podName:2502f90a-1eb4-4fca-ae97-f9624f9caa2c nodeName:}" failed. No retries permitted until 2026-01-23 08:13:20.262310739 +0000 UTC m=+1074.742674125 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs") pod "openstack-operator-controller-manager-57c46955cf-9c26t" (UID: "2502f90a-1eb4-4fca-ae97-f9624f9caa2c") : secret "webhook-server-cert" not found Jan 23 08:13:04 crc kubenswrapper[4982]: I0123 08:13:04.270266 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-metrics-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:13:06 crc kubenswrapper[4982]: E0123 08:13:06.450398 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:e5e017be64edd679623ea1b7e6a1ae780fdcee4ef79be989b93d8c1d082da15b" Jan 23 08:13:06 crc kubenswrapper[4982]: E0123 08:13:06.450840 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:e5e017be64edd679623ea1b7e6a1ae780fdcee4ef79be989b93d8c1d082da15b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-477tm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-59dd8b7cbf-jhcmh_openstack-operators(0167eb4d-3772-402e-8451-69789e17dd5e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:06 crc kubenswrapper[4982]: E0123 08:13:06.452013 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" podUID="0167eb4d-3772-402e-8451-69789e17dd5e" Jan 23 08:13:07 crc kubenswrapper[4982]: E0123 08:13:07.386571 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 23 08:13:07 crc kubenswrapper[4982]: E0123 08:13:07.386953 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hzww7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-44mmv_openstack-operators(5285fcd9-54a1-4499-8930-af660f95709a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:07 crc kubenswrapper[4982]: E0123 08:13:07.388333 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" podUID="5285fcd9-54a1-4499-8930-af660f95709a" Jan 23 08:13:07 crc kubenswrapper[4982]: E0123 08:13:07.416695 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" podUID="5285fcd9-54a1-4499-8930-af660f95709a" Jan 23 08:13:07 crc kubenswrapper[4982]: E0123 08:13:07.416852 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:e5e017be64edd679623ea1b7e6a1ae780fdcee4ef79be989b93d8c1d082da15b\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" 
podUID="0167eb4d-3772-402e-8451-69789e17dd5e" Jan 23 08:13:08 crc kubenswrapper[4982]: E0123 08:13:08.626271 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30" Jan 23 08:13:08 crc kubenswrapper[4982]: E0123 08:13:08.626440 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dl6cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-tt4pp_openstack-operators(854d661c-b4ec-4046-85d3-3e643336d93c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:08 crc kubenswrapper[4982]: E0123 08:13:08.627721 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" podUID="854d661c-b4ec-4046-85d3-3e643336d93c" Jan 23 08:13:09 crc kubenswrapper[4982]: E0123 08:13:09.429144 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" podUID="854d661c-b4ec-4046-85d3-3e643336d93c" Jan 23 08:13:11 crc kubenswrapper[4982]: E0123 08:13:11.211846 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 23 08:13:11 crc kubenswrapper[4982]: E0123 08:13:11.212430 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9gscc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-hfrk2_openstack-operators(6f86042f-a06a-4756-96ae-19de64621eb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:11 crc kubenswrapper[4982]: E0123 08:13:11.213667 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" 
podUID="6f86042f-a06a-4756-96ae-19de64621eb4" Jan 23 08:13:11 crc kubenswrapper[4982]: E0123 08:13:11.442385 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" podUID="6f86042f-a06a-4756-96ae-19de64621eb4" Jan 23 08:13:16 crc kubenswrapper[4982]: E0123 08:13:16.253127 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 23 08:13:16 crc kubenswrapper[4982]: E0123 08:13:16.254495 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mxfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-lqf89_openstack-operators(e57a3249-8bfe-49da-88fa-dbc887422c3c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:16 crc kubenswrapper[4982]: E0123 08:13:16.256207 4982 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" podUID="e57a3249-8bfe-49da-88fa-dbc887422c3c" Jan 23 08:13:16 crc kubenswrapper[4982]: E0123 08:13:16.480214 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" podUID="e57a3249-8bfe-49da-88fa-dbc887422c3c" Jan 23 08:13:19 crc kubenswrapper[4982]: I0123 08:13:19.257153 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:13:19 crc kubenswrapper[4982]: I0123 08:13:19.264583 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/801f503b-36aa-441d-9487-8fedc3365d46-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-gvxjc\" (UID: \"801f503b-36aa-441d-9487-8fedc3365d46\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:13:19 crc kubenswrapper[4982]: I0123 08:13:19.419246 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.462419 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.462677 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntvfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-4nz5s_openstack-operators(ae1bdd5c-8116-47e9-9344-f1d84794541c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.463920 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" podUID="ae1bdd5c-8116-47e9-9344-f1d84794541c" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.501430 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" podUID="ae1bdd5c-8116-47e9-9344-f1d84794541c" Jan 23 08:13:19 crc kubenswrapper[4982]: I0123 08:13:19.764939 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:13:19 crc kubenswrapper[4982]: I0123 08:13:19.768386 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3d5f4f8b-84b1-45e2-b393-254c24c090a8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn\" (UID: \"3d5f4f8b-84b1-45e2-b393-254c24c090a8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.930840 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.931074 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xq7kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-glq8j_openstack-operators(d6169561-3b43-4dcd-8fa6-f08d484bccdf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:19 crc kubenswrapper[4982]: E0123 08:13:19.932380 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" podUID="d6169561-3b43-4dcd-8fa6-f08d484bccdf" Jan 23 08:13:20 crc kubenswrapper[4982]: I0123 08:13:20.003491 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 08:13:20 crc kubenswrapper[4982]: I0123 08:13:20.273662 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:13:20 crc kubenswrapper[4982]: I0123 08:13:20.301210 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2502f90a-1eb4-4fca-ae97-f9624f9caa2c-webhook-certs\") pod \"openstack-operator-controller-manager-57c46955cf-9c26t\" (UID: \"2502f90a-1eb4-4fca-ae97-f9624f9caa2c\") " pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:13:20 crc kubenswrapper[4982]: I0123 08:13:20.311546 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 08:13:20 crc kubenswrapper[4982]: E0123 08:13:20.508429 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" podUID="d6169561-3b43-4dcd-8fa6-f08d484bccdf" Jan 23 08:13:20 crc kubenswrapper[4982]: E0123 08:13:20.606325 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f" Jan 23 08:13:20 crc kubenswrapper[4982]: E0123 08:13:20.606690 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vj9fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-69cf5d4557-8rr6c_openstack-operators(1be00a16-06c6-4fe5-9626-149c854680f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:20 crc kubenswrapper[4982]: E0123 08:13:20.607877 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" Jan 23 08:13:21 crc kubenswrapper[4982]: E0123 08:13:21.025102 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 23 08:13:21 crc kubenswrapper[4982]: E0123 08:13:21.025271 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-brrnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-5xrcb_openstack-operators(a73a3899-9dfc-404d-b7cd-7eaf32de406a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:21 crc kubenswrapper[4982]: E0123 08:13:21.026439 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" podUID="a73a3899-9dfc-404d-b7cd-7eaf32de406a" Jan 23 08:13:21 crc kubenswrapper[4982]: E0123 08:13:21.527148 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:e950ac2df7be78ae0cbcf62fe12ee7a06b628f1903da6fcb741609e857eb1a7f\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" Jan 23 08:13:21 crc kubenswrapper[4982]: E0123 08:13:21.527874 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" podUID="a73a3899-9dfc-404d-b7cd-7eaf32de406a" Jan 23 08:13:26 crc kubenswrapper[4982]: E0123 08:13:26.078545 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 23 08:13:26 crc kubenswrapper[4982]: E0123 08:13:26.079408 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg479,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-gvwjl_openstack-operators(5656681c-2a30-47fa-bb61-f33660c36db0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:26 crc kubenswrapper[4982]: E0123 08:13:26.080713 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podUID="5656681c-2a30-47fa-bb61-f33660c36db0" Jan 23 08:13:26 crc kubenswrapper[4982]: E0123 08:13:26.580463 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podUID="5656681c-2a30-47fa-bb61-f33660c36db0" Jan 23 08:13:29 crc kubenswrapper[4982]: E0123 08:13:29.554657 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 23 08:13:29 crc kubenswrapper[4982]: E0123 08:13:29.555120 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xt2rf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-lcpcl_openstack-operators(cb115f06-d81c-4b67-af3f-454ffd70d358): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:29 crc kubenswrapper[4982]: E0123 08:13:29.556319 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" podUID="cb115f06-d81c-4b67-af3f-454ffd70d358" Jan 23 08:13:29 crc kubenswrapper[4982]: E0123 08:13:29.604354 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" podUID="cb115f06-d81c-4b67-af3f-454ffd70d358" Jan 23 08:13:30 crc kubenswrapper[4982]: E0123 08:13:30.470170 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 23 08:13:30 crc kubenswrapper[4982]: E0123 08:13:30.470580 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wg479,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-pm6mp_openstack-operators(a00a88ad-d184-4394-9231-4da810afad02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:30 crc kubenswrapper[4982]: E0123 08:13:30.472194 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" podUID="a00a88ad-d184-4394-9231-4da810afad02" Jan 23 08:13:30 crc kubenswrapper[4982]: E0123 08:13:30.612317 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" 
podUID="a00a88ad-d184-4394-9231-4da810afad02" Jan 23 08:13:35 crc kubenswrapper[4982]: E0123 08:13:35.995864 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 08:13:35 crc kubenswrapper[4982]: E0123 08:13:35.997092 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8g65r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-p6jkk_openstack-operators(564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:35 crc kubenswrapper[4982]: E0123 08:13:35.998863 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" podUID="564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c" Jan 23 08:13:36 crc kubenswrapper[4982]: E0123 08:13:36.627484 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 23 08:13:36 crc kubenswrapper[4982]: E0123 08:13:36.627992 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kx2s7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-r2wx4_openstack-operators(91102982-af50-41cf-9969-28c92bdf8f36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:36 crc kubenswrapper[4982]: E0123 08:13:36.629229 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" podUID="91102982-af50-41cf-9969-28c92bdf8f36" Jan 23 08:13:36 crc kubenswrapper[4982]: E0123 08:13:36.700757 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" 
podUID="564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c" Jan 23 08:13:37 crc kubenswrapper[4982]: E0123 08:13:37.247499 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b" Jan 23 08:13:37 crc kubenswrapper[4982]: E0123 08:13:37.247829 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-82tjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-ptg2h_openstack-operators(16b1155b-baa1-467b-947f-36673d41a1a0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:37 crc kubenswrapper[4982]: E0123 08:13:37.249040 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" podUID="16b1155b-baa1-467b-947f-36673d41a1a0" Jan 23 08:13:39 crc kubenswrapper[4982]: E0123 08:13:39.879057 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 23 08:13:39 crc kubenswrapper[4982]: E0123 08:13:39.879584 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvb2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-q4hlv_openstack-operators(efce0465-d792-44d6-b0c6-48f3e74ccd7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:39 crc kubenswrapper[4982]: E0123 08:13:39.880913 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" podUID="efce0465-d792-44d6-b0c6-48f3e74ccd7a" Jan 23 08:13:40 crc kubenswrapper[4982]: E0123 08:13:40.324357 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 23 08:13:40 crc kubenswrapper[4982]: E0123 08:13:40.324587 4982 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v76v4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-mwfhl_openstack-operators(75a3ce99-813c-45c1-9b30-0bb0b47f26a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:13:40 crc kubenswrapper[4982]: E0123 08:13:40.325768 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" podUID="75a3ce99-813c-45c1-9b30-0bb0b47f26a6" Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.105812 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.106352 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btbgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-dddts_openstack-operators(34541257-bf45-41a5-a756-393b32053416): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.107569 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" podUID="34541257-bf45-41a5-a756-393b32053416"
Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.598793 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831"
Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.598952 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2ndc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-mwfjx_openstack-operators(9695f150-036e-4e4b-a857-7af97f1576e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.600099 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" podUID="9695f150-036e-4e4b-a857-7af97f1576e5"
Jan 23 08:13:42 crc kubenswrapper[4982]: E0123 08:13:42.715041 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" podUID="9695f150-036e-4e4b-a857-7af97f1576e5"
Jan 23 08:13:43 crc kubenswrapper[4982]: E0123 08:13:43.137066 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Jan 23 08:13:43 crc kubenswrapper[4982]: E0123 08:13:43.137481 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q87h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nc6w7_openstack-operators(96c1e2e5-87b0-4341-83cd-5b623de95215): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 08:13:43 crc kubenswrapper[4982]: E0123 08:13:43.138792 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" podUID="96c1e2e5-87b0-4341-83cd-5b623de95215"
Jan 23 08:13:43 crc kubenswrapper[4982]: I0123 08:13:43.715229 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc"]
Jan 23 08:13:43 crc kubenswrapper[4982]: E0123 08:13:43.724465 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" podUID="96c1e2e5-87b0-4341-83cd-5b623de95215"
Jan 23 08:13:43 crc kubenswrapper[4982]: W0123 08:13:43.738094 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod801f503b_36aa_441d_9487_8fedc3365d46.slice/crio-19d6fd783cd3256117361bb7ff9fbf5fc6520a68838cd8019ca6b86362ae01eb WatchSource:0}: Error finding container 19d6fd783cd3256117361bb7ff9fbf5fc6520a68838cd8019ca6b86362ae01eb: Status 404 returned error can't find the container with id 19d6fd783cd3256117361bb7ff9fbf5fc6520a68838cd8019ca6b86362ae01eb
Jan 23 08:13:43 crc kubenswrapper[4982]: I0123 08:13:43.800311 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn"]
Jan 23 08:13:43 crc kubenswrapper[4982]: W0123 08:13:43.800821 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d5f4f8b_84b1_45e2_b393_254c24c090a8.slice/crio-174358c4957e97b886d31c93eaaed25d13c2267df62a026593af3d730948bfd7 WatchSource:0}: Error finding container 174358c4957e97b886d31c93eaaed25d13c2267df62a026593af3d730948bfd7: Status 404 returned error can't find the container with id 174358c4957e97b886d31c93eaaed25d13c2267df62a026593af3d730948bfd7
Jan 23 08:13:43 crc kubenswrapper[4982]: I0123 08:13:43.877292 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t"]
Jan 23 08:13:43 crc kubenswrapper[4982]: W0123 08:13:43.879502 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2502f90a_1eb4_4fca_ae97_f9624f9caa2c.slice/crio-1992d50c5b5caaeb2c1455ac8e889e68bed09aa844de947abd33104968e6b520 WatchSource:0}: Error finding container 1992d50c5b5caaeb2c1455ac8e889e68bed09aa844de947abd33104968e6b520: Status 404 returned error can't find the container with id 1992d50c5b5caaeb2c1455ac8e889e68bed09aa844de947abd33104968e6b520
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.750739 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" event={"ID":"854d661c-b4ec-4046-85d3-3e643336d93c","Type":"ContainerStarted","Data":"472854d5b241b0ce1bd053b1d9e79f97f15abe6c479daafa6741f442f997320b"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.751238 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.758160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" event={"ID":"e57a3249-8bfe-49da-88fa-dbc887422c3c","Type":"ContainerStarted","Data":"8b6c2090d6b4c49aef4b7678cc35a9368dc1fdb97a3e8dadc3f3977a17fbac95"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.759046 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.765527 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" event={"ID":"801f503b-36aa-441d-9487-8fedc3365d46","Type":"ContainerStarted","Data":"19d6fd783cd3256117361bb7ff9fbf5fc6520a68838cd8019ca6b86362ae01eb"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.767119 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" event={"ID":"1be00a16-06c6-4fe5-9626-149c854680f8","Type":"ContainerStarted","Data":"b21b69f7dab970c6843c5d9d0bb498c096d274449c558fb9fd5e2e6650d2bd0d"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.767850 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.769687 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" event={"ID":"a73a3899-9dfc-404d-b7cd-7eaf32de406a","Type":"ContainerStarted","Data":"bd56a65ae1110d9611c5cbc82ee818907c8e2dc8ed167c776b52aa696cb3fd7d"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.769847 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.775471 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" event={"ID":"3d5f4f8b-84b1-45e2-b393-254c24c090a8","Type":"ContainerStarted","Data":"174358c4957e97b886d31c93eaaed25d13c2267df62a026593af3d730948bfd7"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.776422 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" podStartSLOduration=3.384546919 podStartE2EDuration="57.776385839s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.787348957 +0000 UTC m=+1043.267712343" lastFinishedPulling="2026-01-23 08:13:43.179187877 +0000 UTC m=+1097.659551263" observedRunningTime="2026-01-23 08:13:44.772374743 +0000 UTC m=+1099.252738129" watchObservedRunningTime="2026-01-23 08:13:44.776385839 +0000 UTC m=+1099.256749225"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.780089 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" event={"ID":"5656681c-2a30-47fa-bb61-f33660c36db0","Type":"ContainerStarted","Data":"f27e8e6554e0615dffbb2d4b0bd356f8a070cef76d19e49845ff8569aa631720"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.781026 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.786643 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" event={"ID":"cb115f06-d81c-4b67-af3f-454ffd70d358","Type":"ContainerStarted","Data":"eef3a39fc253ba024fd9776db7db13d69b07e067311447d805974100648e444a"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.787541 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.798404 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" event={"ID":"0167eb4d-3772-402e-8451-69789e17dd5e","Type":"ContainerStarted","Data":"f0760ba7e571dcdd0924f2284e78fe2aadc6da1c9702d7066985346aa0a2c0b6"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.799372 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.806559 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" event={"ID":"2502f90a-1eb4-4fca-ae97-f9624f9caa2c","Type":"ContainerStarted","Data":"34b006a929b40c990bb3515900d72d84fcc8c21077438f35d0102a99aafc6525"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.806616 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" event={"ID":"2502f90a-1eb4-4fca-ae97-f9624f9caa2c","Type":"ContainerStarted","Data":"1992d50c5b5caaeb2c1455ac8e889e68bed09aa844de947abd33104968e6b520"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.807486 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.814918 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" event={"ID":"5285fcd9-54a1-4499-8930-af660f95709a","Type":"ContainerStarted","Data":"da3faf8132437e88a816a27305df57306ff28535446c1a3029a9d39650250fa3"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.815879 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.819151 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" event={"ID":"ae1bdd5c-8116-47e9-9344-f1d84794541c","Type":"ContainerStarted","Data":"a8db99c5ff2a434069da212522ae81671be16c37a83057df980b0a2fafc51e3f"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.819814 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.831041 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" event={"ID":"d6169561-3b43-4dcd-8fa6-f08d484bccdf","Type":"ContainerStarted","Data":"c24c8f6f5b2a928939a24c445396c0c24594d0771c6d8714fb5985f743b2bc61"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.831462 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.845990 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" podStartSLOduration=3.738036121 podStartE2EDuration="57.845932557s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.117184224 +0000 UTC m=+1043.597547610" lastFinishedPulling="2026-01-23 08:13:43.22508067 +0000 UTC m=+1097.705444046" observedRunningTime="2026-01-23 08:13:44.831088855 +0000 UTC m=+1099.311452241" watchObservedRunningTime="2026-01-23 08:13:44.845932557 +0000 UTC m=+1099.326295943"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.852668 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" event={"ID":"6f86042f-a06a-4756-96ae-19de64621eb4","Type":"ContainerStarted","Data":"960ef2c3a886e966eaf15d62911feaab916222c6a76c7db33ddb9aefcbafda49"}
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.853146 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.937215 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" podStartSLOduration=3.878432462 podStartE2EDuration="57.937194799s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.177693323 +0000 UTC m=+1043.658056709" lastFinishedPulling="2026-01-23 08:13:43.23645566 +0000 UTC m=+1097.716819046" observedRunningTime="2026-01-23 08:13:44.879972766 +0000 UTC m=+1099.360336152" watchObservedRunningTime="2026-01-23 08:13:44.937194799 +0000 UTC m=+1099.417558185"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.938208 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podStartSLOduration=3.36038808 podStartE2EDuration="57.938200425s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.650578762 +0000 UTC m=+1043.130942148" lastFinishedPulling="2026-01-23 08:13:43.228391107 +0000 UTC m=+1097.708754493" observedRunningTime="2026-01-23 08:13:44.936079959 +0000 UTC m=+1099.416443355" watchObservedRunningTime="2026-01-23 08:13:44.938200425 +0000 UTC m=+1099.418563821"
Jan 23 08:13:44 crc kubenswrapper[4982]: I0123 08:13:44.991495 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" podStartSLOduration=57.991476993 podStartE2EDuration="57.991476993s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:13:44.978666445 +0000 UTC m=+1099.459029831" watchObservedRunningTime="2026-01-23 08:13:44.991476993 +0000 UTC m=+1099.471840379"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.052553 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" podStartSLOduration=3.949009547 podStartE2EDuration="58.052533007s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.122985697 +0000 UTC m=+1043.603349083" lastFinishedPulling="2026-01-23 08:13:43.226509157 +0000 UTC m=+1097.706872543" observedRunningTime="2026-01-23 08:13:45.049961319 +0000 UTC m=+1099.530324705" watchObservedRunningTime="2026-01-23 08:13:45.052533007 +0000 UTC m=+1099.532896393"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.143744 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" podStartSLOduration=3.568323805 podStartE2EDuration="58.143727957s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.648706002 +0000 UTC m=+1043.129069388" lastFinishedPulling="2026-01-23 08:13:43.224110154 +0000 UTC m=+1097.704473540" observedRunningTime="2026-01-23 08:13:45.143103041 +0000 UTC m=+1099.623466427" watchObservedRunningTime="2026-01-23 08:13:45.143727957 +0000 UTC m=+1099.624091343"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.232189 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" podStartSLOduration=3.94460634 podStartE2EDuration="58.232151504s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.930281284 +0000 UTC m=+1043.410644670" lastFinishedPulling="2026-01-23 08:13:43.217826448 +0000 UTC m=+1097.698189834" observedRunningTime="2026-01-23 08:13:45.231008374 +0000 UTC m=+1099.711371760" watchObservedRunningTime="2026-01-23 08:13:45.232151504 +0000 UTC m=+1099.712514900"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.241909 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" podStartSLOduration=3.966496028 podStartE2EDuration="58.241888771s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.949663466 +0000 UTC m=+1043.430026852" lastFinishedPulling="2026-01-23 08:13:43.225056209 +0000 UTC m=+1097.705419595" observedRunningTime="2026-01-23 08:13:45.197901949 +0000 UTC m=+1099.678265335" watchObservedRunningTime="2026-01-23 08:13:45.241888771 +0000 UTC m=+1099.722252157"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.299211 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podStartSLOduration=3.875448012 podStartE2EDuration="58.299192596s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.802782424 +0000 UTC m=+1043.283145810" lastFinishedPulling="2026-01-23 08:13:43.226527008 +0000 UTC m=+1097.706890394" observedRunningTime="2026-01-23 08:13:45.281118758 +0000 UTC m=+1099.761482154" watchObservedRunningTime="2026-01-23 08:13:45.299192596 +0000 UTC m=+1099.779555982"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.341063 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" podStartSLOduration=3.892739699 podStartE2EDuration="58.341041161s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.776731676 +0000 UTC m=+1043.257095052" lastFinishedPulling="2026-01-23 08:13:43.225033128 +0000 UTC m=+1097.705396514" observedRunningTime="2026-01-23 08:13:45.323920188 +0000 UTC m=+1099.804283564" watchObservedRunningTime="2026-01-23 08:13:45.341041161 +0000 UTC m=+1099.821404547"
Jan 23 08:13:45 crc kubenswrapper[4982]: I0123 08:13:45.361020 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" podStartSLOduration=3.620953285 podStartE2EDuration="58.360998458s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.438804215 +0000 UTC m=+1042.919167591" lastFinishedPulling="2026-01-23 08:13:43.178849378 +0000 UTC m=+1097.659212764" observedRunningTime="2026-01-23 08:13:45.348832747 +0000 UTC m=+1099.829196143" watchObservedRunningTime="2026-01-23 08:13:45.360998458 +0000 UTC m=+1099.841361844"
Jan 23 08:13:48 crc kubenswrapper[4982]: E0123 08:13:48.828167 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" podUID="91102982-af50-41cf-9969-28c92bdf8f36"
Jan 23 08:13:49 crc kubenswrapper[4982]: E0123 08:13:49.828902 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" podUID="16b1155b-baa1-467b-947f-36673d41a1a0"
Jan 23 08:13:50 crc kubenswrapper[4982]: I0123 08:13:50.318046 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t"
Jan 23 08:13:51 crc kubenswrapper[4982]: E0123 08:13:51.959879 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" podUID="efce0465-d792-44d6-b0c6-48f3e74ccd7a"
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.952779 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" event={"ID":"a00a88ad-d184-4394-9231-4da810afad02","Type":"ContainerStarted","Data":"0b14f824a8767c34446eb86fafdee61698bc38081e3c7c08246a56a30ee93d2d"}
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.953637 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp"
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.974552 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" event={"ID":"3d5f4f8b-84b1-45e2-b393-254c24c090a8","Type":"ContainerStarted","Data":"db96beddac2c035d87f497ee12c41625cd44ebc5c15b9b4a958cd153d6dbd054"}
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.974644 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn"
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.976141 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" podStartSLOduration=7.448916763 podStartE2EDuration="1m6.976123672s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:48.938143982 +0000 UTC m=+1043.418507368" lastFinishedPulling="2026-01-23 08:13:48.465350891 +0000 UTC m=+1102.945714277" observedRunningTime="2026-01-23 08:13:53.971013927 +0000 UTC m=+1108.451377323" watchObservedRunningTime="2026-01-23 08:13:53.976123672 +0000 UTC m=+1108.456487068"
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.976297 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" event={"ID":"801f503b-36aa-441d-9487-8fedc3365d46","Type":"ContainerStarted","Data":"db81386d2122947b33f879d31c408b79aee6c4fa677223397b0be2da2e761c46"}
Jan 23 08:13:53 crc kubenswrapper[4982]: I0123 08:13:53.976458 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc"
Jan 23 08:13:54 crc kubenswrapper[4982]: I0123 08:13:54.015764 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" podStartSLOduration=58.811510834 podStartE2EDuration="1m7.015743379s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:13:43.803903127 +0000 UTC m=+1098.284266513" lastFinishedPulling="2026-01-23 08:13:52.008135672 +0000 UTC m=+1106.488499058" observedRunningTime="2026-01-23 08:13:54.006442973 +0000 UTC m=+1108.486806359" watchObservedRunningTime="2026-01-23 08:13:54.015743379 +0000 UTC m=+1108.496106765"
Jan 23 08:13:54 crc kubenswrapper[4982]: I0123 08:13:54.026654 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" podStartSLOduration=58.774722252 podStartE2EDuration="1m7.026638487s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:13:43.740553453 +0000 UTC m=+1098.220916839" lastFinishedPulling="2026-01-23 08:13:51.992469688 +0000 UTC m=+1106.472833074" observedRunningTime="2026-01-23 08:13:54.02067923 +0000 UTC m=+1108.501042616" watchObservedRunningTime="2026-01-23 08:13:54.026638487 +0000 UTC m=+1108.507001873"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.463245 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.492010 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.535722 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.596030 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.605320 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.711550 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.806550 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s"
Jan 23 08:13:57 crc kubenswrapper[4982]: I0123 08:13:57.839074 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2"
Jan 23 08:13:58 crc kubenswrapper[4982]: I0123 08:13:58.016546 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl"
Jan 23 08:13:58 crc kubenswrapper[4982]: I0123 08:13:58.054333 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb"
Jan 23 08:13:58 crc kubenswrapper[4982]: I0123 08:13:58.368163 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89"
Jan 23 08:13:58 crc kubenswrapper[4982]: E0123 08:13:58.790128 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" podUID="75a3ce99-813c-45c1-9b30-0bb0b47f26a6"
Jan 23 08:13:58 crc kubenswrapper[4982]: E0123 08:13:58.917792 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" podUID="34541257-bf45-41a5-a756-393b32053416"
Jan 23 08:13:59 crc kubenswrapper[4982]: I0123 08:13:59.425430 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc"
Jan 23 08:14:00 crc kubenswrapper[4982]: I0123 08:14:00.009413 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn"
Jan 23 08:14:00 crc kubenswrapper[4982]: I0123 08:14:00.040950 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" event={"ID":"96c1e2e5-87b0-4341-83cd-5b623de95215","Type":"ContainerStarted","Data":"ab0aad75ebc90beb32455ba6de75406bcb427e96c10e149ce22e9b6f3e6104d7"}
Jan 23 08:14:00 crc kubenswrapper[4982]: I0123 08:14:00.044812 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" event={"ID":"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c","Type":"ContainerStarted","Data":"db28aaa642def327de785e1eeadefcc3ddb57db6f55f6315a4de92785cb8783c"}
Jan 23 08:14:00 crc kubenswrapper[4982]: I0123 08:14:00.045425 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk"
Jan 23 08:14:00 crc kubenswrapper[4982]: I0123 08:14:00.069816 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" podStartSLOduration=3.447581434 podStartE2EDuration="1m13.069788438s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.297743036 +0000 UTC m=+1043.778106422" lastFinishedPulling="2026-01-23 08:13:58.91995004 +0000 UTC m=+1113.400313426" observedRunningTime="2026-01-23 08:14:00.061869139 +0000 UTC m=+1114.542232525" watchObservedRunningTime="2026-01-23 08:14:00.069788438 +0000 UTC m=+1114.550151854"
Jan 23 08:14:01 crc kubenswrapper[4982]: I0123 08:14:01.059063 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" event={"ID":"9695f150-036e-4e4b-a857-7af97f1576e5","Type":"ContainerStarted","Data":"16645ed897231cac65f1c6458ea400c2e4f37c6cfcbf6ea20c16e3910bd54665"}
Jan 23 08:14:01 crc kubenswrapper[4982]: I0123 08:14:01.059480 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx"
Jan 23 08:14:01 crc kubenswrapper[4982]: I0123 08:14:01.080193 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" podStartSLOduration=2.809331915 podStartE2EDuration="1m13.080175451s" podCreationTimestamp="2026-01-23 08:12:48 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.450719769 +0000 UTC m=+1043.931083155" lastFinishedPulling="2026-01-23 08:13:59.721563305 +0000 UTC m=+1114.201926691" observedRunningTime="2026-01-23 08:14:01.07522784 +0000 UTC m=+1115.555591226" watchObservedRunningTime="2026-01-23 08:14:01.080175451 +0000 UTC m=+1115.560538827"
Jan 23 08:14:01 crc kubenswrapper[4982]: I0123 08:14:01.098094 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" podStartSLOduration=3.283340153 podStartE2EDuration="1m14.098062084s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.119261188 +0000 UTC m=+1043.599624564" lastFinishedPulling="2026-01-23 08:13:59.933983109 +0000 UTC m=+1114.414346495" observedRunningTime="2026-01-23 08:14:01.09221908 +0000 UTC m=+1115.572582466" watchObservedRunningTime="2026-01-23 08:14:01.098062084 +0000 UTC m=+1115.578425480"
Jan 23 08:14:02 crc kubenswrapper[4982]: I0123 08:14:02.067676 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" event={"ID":"91102982-af50-41cf-9969-28c92bdf8f36","Type":"ContainerStarted","Data":"6c656c838f43d3b5711e71c1a34a758f7a7ef7e0dda96f4a8b1dec98fc7fc1c3"}
Jan 23 08:14:02 crc kubenswrapper[4982]: I0123 08:14:02.068196 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4"
Jan 23 08:14:02 crc kubenswrapper[4982]: I0123 08:14:02.091148 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" podStartSLOduration=2.930976652 podStartE2EDuration="1m15.09112908s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.30621801 +0000 UTC m=+1043.786581406" lastFinishedPulling="2026-01-23 08:14:01.466370448 +0000 UTC m=+1115.946733834" observedRunningTime="2026-01-23 08:14:02.087079453 +0000 UTC m=+1116.567442839" watchObservedRunningTime="2026-01-23 08:14:02.09112908 +0000 UTC m=+1116.571492466"
Jan 23 08:14:03 crc kubenswrapper[4982]: I0123 08:14:03.079481 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" event={"ID":"16b1155b-baa1-467b-947f-36673d41a1a0","Type":"ContainerStarted","Data":"0288c7c23107b9a1b8c6882c416553bc8c31170049feaa4b52d9c97974df4149"}
Jan 23 08:14:03 crc kubenswrapper[4982]: I0123 08:14:03.080079 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h"
Jan 23 08:14:03 crc kubenswrapper[4982]: I0123 08:14:03.103827 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" podStartSLOduration=3.1189589189999998 podStartE2EDuration="1m16.103802943s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.311115199 +0000 UTC m=+1043.791478585" lastFinishedPulling="2026-01-23 08:14:02.295959223 +0000 UTC m=+1116.776322609" observedRunningTime="2026-01-23 08:14:03.098263647 +0000 UTC m=+1117.578627033" watchObservedRunningTime="2026-01-23 08:14:03.103802943 +0000 UTC m=+1117.584166339"
Jan 23 08:14:06 crc kubenswrapper[4982]: I0123 08:14:06.102807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" event={"ID":"efce0465-d792-44d6-b0c6-48f3e74ccd7a","Type":"ContainerStarted","Data":"ecf97bf1bac39b955ab576a6e10f77f3a3a665795334c4ca1ab24c4461a9a98f"}
Jan 23 08:14:06 crc kubenswrapper[4982]: I0123 08:14:06.103577 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv"
Jan 23 08:14:06 crc kubenswrapper[4982]: I0123 08:14:06.125555 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" podStartSLOduration=3.190380708 podStartE2EDuration="1m19.125533793s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.312205398 +0000 UTC m=+1043.792568784" lastFinishedPulling="2026-01-23 08:14:05.247358323 +0000 UTC m=+1119.727721869" observedRunningTime="2026-01-23 08:14:06.118322152 +0000 UTC m=+1120.598685548" watchObservedRunningTime="2026-01-23 08:14:06.125533793 +0000 UTC m=+1120.605897179"
Jan 23 08:14:07 crc kubenswrapper[4982]: I0123 08:14:07.503315 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp"
Jan 23 08:14:08 crc kubenswrapper[4982]: I0123 08:14:08.065574 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx"
Jan 23 08:14:08 crc kubenswrapper[4982]: I0123 08:14:08.236716 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4"
Jan 23 08:14:08 crc kubenswrapper[4982]: I0123 08:14:08.457349 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk"
Jan 23 08:14:08 crc kubenswrapper[4982]: I0123 08:14:08.498727 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h"
Jan 23 08:14:13 crc kubenswrapper[4982]: I0123 08:14:13.158233 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" event={"ID":"34541257-bf45-41a5-a756-393b32053416","Type":"ContainerStarted","Data":"94b99109dafbff61442584be6d7dea03989a28dcf961f0ef8b583b62f6f8ad4a"}
Jan 23 08:14:13 crc kubenswrapper[4982]: I0123 08:14:13.160029 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts"
Jan 23 08:14:13 crc kubenswrapper[4982]: I0123 08:14:13.180505 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" podStartSLOduration=3.122338018 podStartE2EDuration="1m26.180480224s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.297760286 +0000 UTC m=+1043.778123672" lastFinishedPulling="2026-01-23 08:14:12.355902492 +0000 UTC m=+1126.836265878" observedRunningTime="2026-01-23 08:14:13.174821995 +0000 UTC m=+1127.655185391" watchObservedRunningTime="2026-01-23 08:14:13.180480224 +0000 UTC m=+1127.660843620"
Jan 23 08:14:14 crc kubenswrapper[4982]: I0123 08:14:14.168832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" event={"ID":"75a3ce99-813c-45c1-9b30-0bb0b47f26a6","Type":"ContainerStarted","Data":"2de6f1cd4bfc6b7c28f405915dfa31e90d52a2f370939b0050b33e048e45b579"}
Jan 23 08:14:14 crc kubenswrapper[4982]: I0123 08:14:14.169657 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl"
Jan 23 08:14:14 crc kubenswrapper[4982]: I0123 08:14:14.190215 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" podStartSLOduration=3.192362829 podStartE2EDuration="1m27.190195279s" podCreationTimestamp="2026-01-23 08:12:47 +0000 UTC" firstStartedPulling="2026-01-23 08:12:49.313757249 +0000 UTC m=+1043.794120635" lastFinishedPulling="2026-01-23 08:14:13.311589699 +0000 UTC m=+1127.791953085" observedRunningTime="2026-01-23 08:14:14.190010404 +0000 UTC m=+1128.670373790" watchObservedRunningTime="2026-01-23 08:14:14.190195279 +0000 UTC m=+1128.670558675"
Jan 23 08:14:18 crc kubenswrapper[4982]: I0123 08:14:18.091889 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv"
Jan 23 08:14:18 crc kubenswrapper[4982]: I0123 08:14:18.302588 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl"
Jan 23 08:14:18 crc kubenswrapper[4982]: I0123 08:14:18.407429 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.456947 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"]
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.460021 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.463530 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.463862 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.464034 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-k6vfd"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.464194 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.476214 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"]
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.517636 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jpxpt"]
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.519189 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.521188 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.533211 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jpxpt"]
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.619284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-config\") pod \"dnsmasq-dns-84bb9d8bd9-tvbwr\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.619348 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2jpg\" (UniqueName: \"kubernetes.io/projected/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-kube-api-access-t2jpg\") pod \"dnsmasq-dns-84bb9d8bd9-tvbwr\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.619370 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsknm\" (UniqueName: \"kubernetes.io/projected/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-kube-api-access-bsknm\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.619637 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-dns-svc\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.619734 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-config\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.721024 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-config\") pod \"dnsmasq-dns-84bb9d8bd9-tvbwr\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.721090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2jpg\" (UniqueName: \"kubernetes.io/projected/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-kube-api-access-t2jpg\") pod \"dnsmasq-dns-84bb9d8bd9-tvbwr\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.721114 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsknm\" (UniqueName: \"kubernetes.io/projected/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-kube-api-access-bsknm\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.721161 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-dns-svc\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.721197 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-config\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.722097 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-config\") pod \"dnsmasq-dns-84bb9d8bd9-tvbwr\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.722200 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-config\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.722260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-dns-svc\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.743476 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsknm\" (UniqueName: \"kubernetes.io/projected/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-kube-api-access-bsknm\") pod \"dnsmasq-dns-5f854695bc-jpxpt\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.743475 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2jpg\" (UniqueName: \"kubernetes.io/projected/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-kube-api-access-t2jpg\") pod \"dnsmasq-dns-84bb9d8bd9-tvbwr\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.780435 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"
Jan 23 08:14:33 crc kubenswrapper[4982]: I0123 08:14:33.836971 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt"
Jan 23 08:14:34 crc kubenswrapper[4982]: I0123 08:14:34.312673 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"]
Jan 23 08:14:34 crc kubenswrapper[4982]: I0123 08:14:34.326856 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr" event={"ID":"fb2a8db8-862c-4550-a240-6b3a0fb03f5a","Type":"ContainerStarted","Data":"557a461974a9a3735f2a2e0ae4c32262d62f44368902283b3074db87073d02f6"}
Jan 23 08:14:34 crc kubenswrapper[4982]: I0123 08:14:34.400205 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jpxpt"]
Jan 23 08:14:34 crc kubenswrapper[4982]: W0123 08:14:34.401117 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73f39c4b_d9c0_45e4_ab2c_c5b36c4189bf.slice/crio-01ec48c1682bcb55914936c58df9d1c93742e558a94aeae3c52818df25372766 WatchSource:0}: Error finding container 01ec48c1682bcb55914936c58df9d1c93742e558a94aeae3c52818df25372766: Status 404 returned error can't find the container with id 01ec48c1682bcb55914936c58df9d1c93742e558a94aeae3c52818df25372766
Jan 23 08:14:35 crc kubenswrapper[4982]: I0123 08:14:35.348229 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt" event={"ID":"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf","Type":"ContainerStarted","Data":"01ec48c1682bcb55914936c58df9d1c93742e558a94aeae3c52818df25372766"}
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.314936 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jpxpt"]
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.362850 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-v6xt9"]
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.370317 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.378815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.378964 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnkb\" (UniqueName: \"kubernetes.io/projected/1d4dfa64-9515-462d-a0dc-e9a61b95e022-kube-api-access-wmnkb\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.379140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-config\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.393576 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-v6xt9"]
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.480291 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.480352 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmnkb\" (UniqueName: \"kubernetes.io/projected/1d4dfa64-9515-462d-a0dc-e9a61b95e022-kube-api-access-wmnkb\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.480407 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-config\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.481418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-config\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.483056 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.514074 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmnkb\" (UniqueName: \"kubernetes.io/projected/1d4dfa64-9515-462d-a0dc-e9a61b95e022-kube-api-access-wmnkb\") pod \"dnsmasq-dns-c7cbb8f79-v6xt9\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") " pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.687039 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"]
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.708832 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.752263 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-qv9kx"]
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.758531 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.780863 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-qv9kx"]
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.795555 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzv4k\" (UniqueName: \"kubernetes.io/projected/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-kube-api-access-xzv4k\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.795774 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.795824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-config\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.897072 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.897121 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-config\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.897198 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzv4k\" (UniqueName: \"kubernetes.io/projected/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-kube-api-access-xzv4k\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.898540 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-dns-svc\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.898799 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-config\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:36 crc kubenswrapper[4982]: I0123 08:14:36.956504 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzv4k\" (UniqueName: \"kubernetes.io/projected/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-kube-api-access-xzv4k\") pod \"dnsmasq-dns-95f5f6995-qv9kx\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") " pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.136941 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.541774 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.546857 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.552947 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.552976 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.552893 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.553014 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.552950 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.553386 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.562181 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7fp2f"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.562344 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.646921 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-v6xt9"]
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.720186 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.720269 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.720565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.720742 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.720916 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b4b8957-4a36-4d8b-a591-fdcbe696c808-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.721132 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.721160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.721254 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.721305 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.721367 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b4b8957-4a36-4d8b-a591-fdcbe696c808-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.721468 4982 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcw2n\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-kube-api-access-wcw2n\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.822838 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.822928 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b4b8957-4a36-4d8b-a591-fdcbe696c808-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.822984 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823000 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823028 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823046 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b4b8957-4a36-4d8b-a591-fdcbe696c808-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcw2n\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-kube-api-access-wcw2n\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823120 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823154 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.823182 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.824178 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.826178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.826636 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.829006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.831512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.831542 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.832148 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.832331 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.834669 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b4b8957-4a36-4d8b-a591-fdcbe696c808-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.835460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b4b8957-4a36-4d8b-a591-fdcbe696c808-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.849083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcw2n\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-kube-api-access-wcw2n\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.871074 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.885068 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.890674 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.893794 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.894167 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.894544 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.894592 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.894734 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-j69fj" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.894835 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.896692 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.903843 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-qv9kx"] Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.915644 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:14:37 crc kubenswrapper[4982]: I0123 08:14:37.919388 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.026932 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027011 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027052 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027083 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027196 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmrz\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-kube-api-access-lfmrz\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027268 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027300 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027428 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26d23dde-8aac-481b-bfdb-ce3315166bc4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.027452 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26d23dde-8aac-481b-bfdb-ce3315166bc4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.129426 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.129807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.129835 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.129857 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.129949 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.129973 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfmrz\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-kube-api-access-lfmrz\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.130013 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.130041 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.130121 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.130142 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26d23dde-8aac-481b-bfdb-ce3315166bc4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.130159 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26d23dde-8aac-481b-bfdb-ce3315166bc4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.131094 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.132689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.133056 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.133232 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.133272 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.133639 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.138409 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.142263 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26d23dde-8aac-481b-bfdb-ce3315166bc4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.145686 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.148341 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26d23dde-8aac-481b-bfdb-ce3315166bc4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.149297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmrz\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-kube-api-access-lfmrz\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.182182 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") " pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.290761 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.412589 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" event={"ID":"1d4dfa64-9515-462d-a0dc-e9a61b95e022","Type":"ContainerStarted","Data":"4bc94ba765678d55cad432371cf10aea237c9bdaf276d57af27e8f4d8afef371"} Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.414763 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" event={"ID":"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0","Type":"ContainerStarted","Data":"16c24ae706c9a3c9d2693301483ee9689a581bbeeec35fef233af8920ae17e04"} Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.509435 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 08:14:38 crc kubenswrapper[4982]: I0123 08:14:38.936778 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.161867 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.164801 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.170962 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.171510 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.177097 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9stb6" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.184111 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.186112 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.190075 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257559 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4741be-bf98-457a-9275-45405ac3f257-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 
08:14:39.257724 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-kolla-config\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257740 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-config-data-default\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257777 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7c2\" (UniqueName: \"kubernetes.io/projected/4f4741be-bf98-457a-9275-45405ac3f257-kube-api-access-2z7c2\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257800 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257819 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.257839 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359159 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z7c2\" (UniqueName: \"kubernetes.io/projected/4f4741be-bf98-457a-9275-45405ac3f257-kube-api-access-2z7c2\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359295 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4741be-bf98-457a-9275-45405ac3f257-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359389 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-config-data-default\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.359408 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-kolla-config\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.360186 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4741be-bf98-457a-9275-45405ac3f257-config-data-generated\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.360311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-kolla-config\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.360485 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.360755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-config-data-default\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.361987 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-operator-scripts\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" 
Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.365493 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.366404 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.407356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.415801 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z7c2\" (UniqueName: \"kubernetes.io/projected/4f4741be-bf98-457a-9275-45405ac3f257-kube-api-access-2z7c2\") pod \"openstack-galera-0\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") " pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.451751 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26d23dde-8aac-481b-bfdb-ce3315166bc4","Type":"ContainerStarted","Data":"966fb840755e0d2c7ff72877b3911b657707a9d8e4bf466fe4bf0577d059b908"} Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.453929 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b4b8957-4a36-4d8b-a591-fdcbe696c808","Type":"ContainerStarted","Data":"882c198d7bc6a3775ed97b448804f1092ce6195bcc3449f5f693c80ddce3c567"} Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.546042 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 08:14:39 crc kubenswrapper[4982]: I0123 08:14:39.960136 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 08:14:40 crc kubenswrapper[4982]: W0123 08:14:40.031022 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4741be_bf98_457a_9275_45405ac3f257.slice/crio-50a143aee06109d77c7bc62d3104a6e558a0d8940a71451cda8fb11cf0afb120 WatchSource:0}: Error finding container 50a143aee06109d77c7bc62d3104a6e558a0d8940a71451cda8fb11cf0afb120: Status 404 returned error can't find the container with id 50a143aee06109d77c7bc62d3104a6e558a0d8940a71451cda8fb11cf0afb120 Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.319600 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.324680 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.330798 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.331384 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.331555 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9tqch" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.331742 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.369466 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388155 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be869517-a530-472f-b11e-59558aafc066-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388252 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4ck\" (UniqueName: \"kubernetes.io/projected/be869517-a530-472f-b11e-59558aafc066-kube-api-access-5m4ck\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388319 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388374 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.388406 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.486223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4f4741be-bf98-457a-9275-45405ac3f257","Type":"ContainerStarted","Data":"50a143aee06109d77c7bc62d3104a6e558a0d8940a71451cda8fb11cf0afb120"} Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be869517-a530-472f-b11e-59558aafc066-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490802 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490827 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4ck\" (UniqueName: \"kubernetes.io/projected/be869517-a530-472f-b11e-59558aafc066-kube-api-access-5m4ck\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490866 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490923 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.490982 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.491906 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.492166 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be869517-a530-472f-b11e-59558aafc066-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.493644 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.494365 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.494963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.501140 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.502847 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.509855 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4ck\" (UniqueName: \"kubernetes.io/projected/be869517-a530-472f-b11e-59558aafc066-kube-api-access-5m4ck\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc 
kubenswrapper[4982]: I0123 08:14:40.534238 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.651387 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.652820 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.656929 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-82kl9" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.657199 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.657380 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.665733 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.697744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kolla-config\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.697819 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-config-data\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.697898 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-memcached-tls-certs\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.697926 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7sl\" (UniqueName: \"kubernetes.io/projected/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kube-api-access-zx7sl\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.698002 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-combined-ca-bundle\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.699278 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.800007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-combined-ca-bundle\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.800082 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kolla-config\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.800103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-config-data\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.800434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-memcached-tls-certs\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.800479 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx7sl\" (UniqueName: \"kubernetes.io/projected/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kube-api-access-zx7sl\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.802253 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kolla-config\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.805516 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-config-data\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.808202 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-combined-ca-bundle\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.815414 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-memcached-tls-certs\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:40 crc kubenswrapper[4982]: I0123 08:14:40.823560 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx7sl\" (UniqueName: \"kubernetes.io/projected/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kube-api-access-zx7sl\") pod \"memcached-0\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " pod="openstack/memcached-0" Jan 23 08:14:41 crc kubenswrapper[4982]: I0123 08:14:41.013684 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 08:14:41 crc kubenswrapper[4982]: I0123 08:14:41.630590 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 08:14:41 crc kubenswrapper[4982]: I0123 08:14:41.912043 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 08:14:42 crc kubenswrapper[4982]: I0123 08:14:42.530356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"be869517-a530-472f-b11e-59558aafc066","Type":"ContainerStarted","Data":"5498c06a56d02bf6b6b042242193dffd02d56d85dd54376723354b7be2bbd132"} Jan 23 08:14:42 crc kubenswrapper[4982]: I0123 08:14:42.537968 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16","Type":"ContainerStarted","Data":"e2009cbf4ea4a3d1c3266e18593c6f43e6ae37eebc7b37e2fc039178b62bce04"} Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.262280 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.263435 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.278853 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-z7rpr" Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.280425 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.385856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbn2\" (UniqueName: \"kubernetes.io/projected/c04ec7cf-944b-4279-8426-3485a93d6bae-kube-api-access-lgbn2\") pod \"kube-state-metrics-0\" (UID: \"c04ec7cf-944b-4279-8426-3485a93d6bae\") " pod="openstack/kube-state-metrics-0" Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.488141 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgbn2\" (UniqueName: \"kubernetes.io/projected/c04ec7cf-944b-4279-8426-3485a93d6bae-kube-api-access-lgbn2\") pod \"kube-state-metrics-0\" (UID: \"c04ec7cf-944b-4279-8426-3485a93d6bae\") " pod="openstack/kube-state-metrics-0" Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.524262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgbn2\" (UniqueName: \"kubernetes.io/projected/c04ec7cf-944b-4279-8426-3485a93d6bae-kube-api-access-lgbn2\") pod \"kube-state-metrics-0\" (UID: \"c04ec7cf-944b-4279-8426-3485a93d6bae\") " pod="openstack/kube-state-metrics-0" Jan 23 08:14:43 crc kubenswrapper[4982]: I0123 08:14:43.609203 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.469695 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qdkmz"] Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.474878 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.480104 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.483011 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.483282 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hwf7s" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.493857 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qdkmz"] Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-ovn-controller-tls-certs\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573715 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573842 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcf4\" (UniqueName: \"kubernetes.io/projected/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-kube-api-access-xjcf4\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573871 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run-ovn\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-log-ovn\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-combined-ca-bundle\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.573977 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-scripts\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.576275 4982 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6ksf4"] Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.578149 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.590795 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6ksf4"] Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.675843 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-etc-ovs\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.675912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-run\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.675997 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcf4\" (UniqueName: \"kubernetes.io/projected/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-kube-api-access-xjcf4\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676021 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run-ovn\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676052 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-log\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676070 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-log-ovn\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676098 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-combined-ca-bundle\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676122 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-lib\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676139 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-scripts\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676161 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-scripts\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676244 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27w4t\" (UniqueName: \"kubernetes.io/projected/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-kube-api-access-27w4t\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676272 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-ovn-controller-tls-certs\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676305 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.676812 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.677018 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run-ovn\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.677192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-log-ovn\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.678793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-scripts\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.685125 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-combined-ca-bundle\") pod \"ovn-controller-qdkmz\" (UID: 
\"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.711551 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-ovn-controller-tls-certs\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.723325 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcf4\" (UniqueName: \"kubernetes.io/projected/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-kube-api-access-xjcf4\") pod \"ovn-controller-qdkmz\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781262 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-etc-ovs\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-run\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781559 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-log\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781661 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-lib\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781611 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-etc-ovs\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-scripts\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781766 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-run\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.781999 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-log\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.782214 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27w4t\" (UniqueName: \"kubernetes.io/projected/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-kube-api-access-27w4t\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.783047 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-lib\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.784054 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-scripts\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.816742 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27w4t\" (UniqueName: \"kubernetes.io/projected/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-kube-api-access-27w4t\") pod \"ovn-controller-ovs-6ksf4\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.839432 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz" Jan 23 08:14:46 crc kubenswrapper[4982]: I0123 08:14:46.911013 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.565477 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.567901 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.575097 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.575340 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.575495 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.575870 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.576933 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-d4jdq" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.598892 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624111 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624164 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624191 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2ng\" (UniqueName: \"kubernetes.io/projected/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-kube-api-access-vt2ng\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624217 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624528 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624559 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.624703 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-config\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.725909 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.725955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.725982 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2ng\" (UniqueName: \"kubernetes.io/projected/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-kube-api-access-vt2ng\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.726007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.726051 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.726073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.726097 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.726129 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-config\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 
08:14:48.729101 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.729502 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.730586 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-config\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.731409 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.735593 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.738740 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.745965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.746261 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2ng\" (UniqueName: \"kubernetes.io/projected/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-kube-api-access-vt2ng\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.762523 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:48 crc kubenswrapper[4982]: I0123 08:14:48.899181 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.292423 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.294539 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.299079 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.299664 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.300262 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.300323 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mhdtb" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.301730 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.456760 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.457178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.457277 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.457316 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.457347 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.457374 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc 
kubenswrapper[4982]: I0123 08:14:50.457668 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-config\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.457719 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5j9m\" (UniqueName: \"kubernetes.io/projected/07620928-c400-47dd-b76d-25f31c3c1e0e-kube-api-access-k5j9m\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.558833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.558944 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.558972 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559049 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-config\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5j9m\" (UniqueName: \"kubernetes.io/projected/07620928-c400-47dd-b76d-25f31c3c1e0e-kube-api-access-k5j9m\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559137 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559159 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559191 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559412 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.559426 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.560055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-config\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.560427 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.563674 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.564204 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.566040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.585059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5j9m\" (UniqueName: \"kubernetes.io/projected/07620928-c400-47dd-b76d-25f31c3c1e0e-kube-api-access-k5j9m\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.586335 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " pod="openstack/ovsdbserver-sb-0" Jan 23 08:14:50 crc kubenswrapper[4982]: I0123 08:14:50.628224 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.141886 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4"] Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.143697 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.145915 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.147046 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.157164 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4"] Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.196026 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpf85\" (UniqueName: \"kubernetes.io/projected/80da01e2-2367-4834-9237-e25fe95a3728-kube-api-access-cpf85\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.196154 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80da01e2-2367-4834-9237-e25fe95a3728-config-volume\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.196227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80da01e2-2367-4834-9237-e25fe95a3728-secret-volume\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.298196 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpf85\" (UniqueName: \"kubernetes.io/projected/80da01e2-2367-4834-9237-e25fe95a3728-kube-api-access-cpf85\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.298332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80da01e2-2367-4834-9237-e25fe95a3728-config-volume\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.298376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80da01e2-2367-4834-9237-e25fe95a3728-secret-volume\") pod \"collect-profiles-29485935-9t9m4\" (UID: 
\"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.301520 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80da01e2-2367-4834-9237-e25fe95a3728-config-volume\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.318136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80da01e2-2367-4834-9237-e25fe95a3728-secret-volume\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.324882 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpf85\" (UniqueName: \"kubernetes.io/projected/80da01e2-2367-4834-9237-e25fe95a3728-kube-api-access-cpf85\") pod \"collect-profiles-29485935-9t9m4\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:00 crc kubenswrapper[4982]: I0123 08:15:00.506646 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:02 crc kubenswrapper[4982]: E0123 08:15:02.986668 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 23 08:15:02 crc kubenswrapper[4982]: E0123 08:15:02.987489 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcw2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(0b4b8957-4a36-4d8b-a591-fdcbe696c808): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:02 crc kubenswrapper[4982]: E0123 08:15:02.987583 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 23 08:15:02 crc kubenswrapper[4982]: E0123 08:15:02.987974 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lfmrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(26d23dde-8aac-481b-bfdb-ce3315166bc4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:02 crc kubenswrapper[4982]: E0123 08:15:02.988703 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" Jan 23 08:15:02 crc kubenswrapper[4982]: E0123 08:15:02.989824 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" Jan 23 08:15:03 crc kubenswrapper[4982]: E0123 08:15:03.781973 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" Jan 23 08:15:03 crc kubenswrapper[4982]: E0123 08:15:03.782475 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" 
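The "Error syncing pod" entries above record the kubelet turning a failed CRI PullImage call (canceled while copying the image config) into ErrImagePull, and then into ImagePullBackOff on the following sync at 08:15:03. The retry cadence behind that back-off can be approximated as a doubling delay; the sketch below assumes the commonly cited kubelet defaults of a 10s initial delay capped at 5m (the exact values depend on kubelet version and configuration, and nextBackoff is a hypothetical helper, not kubelet code).

package main

import (
	"fmt"
	"time"
)

// nextBackoff approximates the delay before image-pull attempt n (1-based):
// an assumed 10s initial delay that doubles per failure, capped at an
// assumed 5m maximum.
func nextBackoff(attempt int) time.Duration {
	const (
		initial = 10 * time.Second // assumed kubelet default
		max     = 5 * time.Minute  // assumed kubelet cap
	)
	d := initial << (attempt - 1)
	if d <= 0 || d > max {
		return max
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("attempt %d: wait %s\n", n, nextBackoff(n))
	}
}

On such a schedule the two rabbitmq pods would be retried at roughly 10s, 20s, 40s, ... intervals until the registry copy succeeds, at which point the kubelet clears the back-off and the setup-container init container can start.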
Jan 23 08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.174734 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 23 08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.174963 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5m4ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(be869517-a530-472f-b11e-59558aafc066): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.176389 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="be869517-a530-472f-b11e-59558aafc066" Jan 23 08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.802065 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="be869517-a530-472f-b11e-59558aafc066" Jan 23 
08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.843780 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 23 08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.843983 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n64fh598h688hch687hb6h5c4h6h645hbh58dh5b7h5fch687h5b8hch79h5d5hcch647h6fh5bch65fh5c9hdfhb4h85h579h576h5f8hf7h67dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zx7sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:05 crc kubenswrapper[4982]: E0123 08:15:05.845375 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" Jan 23 08:15:06 crc kubenswrapper[4982]: E0123 08:15:06.810307 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" Jan 23 08:15:11 crc kubenswrapper[4982]: I0123 08:15:11.435836 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:15:11 crc kubenswrapper[4982]: I0123 08:15:11.435938 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:15:11 crc kubenswrapper[4982]: E0123 08:15:11.476548 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 23 08:15:11 crc kubenswrapper[4982]: E0123 08:15:11.476760 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2z7c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(4f4741be-bf98-457a-9275-45405ac3f257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:11 crc kubenswrapper[4982]: E0123 08:15:11.477951 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="4f4741be-bf98-457a-9275-45405ac3f257" Jan 23 08:15:11 crc kubenswrapper[4982]: E0123 08:15:11.871834 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="4f4741be-bf98-457a-9275-45405ac3f257" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.668549 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.668803 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq 
--interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2jpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-tvbwr_openstack(fb2a8db8-862c-4550-a240-6b3a0fb03f5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.669972 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr" podUID="fb2a8db8-862c-4550-a240-6b3a0fb03f5a" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.709609 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.710351 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmnkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-v6xt9_openstack(1d4dfa64-9515-462d-a0dc-e9a61b95e022): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.711878 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.729805 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.730000 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bsknm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-jpxpt_openstack(73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.731683 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt" podUID="73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.787797 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.788007 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzv4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-qv9kx_openstack(05fbf7b6-b095-4e1b-b23c-e5c937ef5af0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.790725 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.909080 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" Jan 23 08:15:12 crc kubenswrapper[4982]: E0123 08:15:12.915263 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.452940 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.460023 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt" Jan 23 08:15:13 crc kubenswrapper[4982]: W0123 08:15:13.530926 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10c1d9ff_78da_4916_b01b_e4eb2935c4c3.slice/crio-5b6443d25d7b17ed816c3bd2b5a031639f52caf117cd1730e462a73088a7ecaa WatchSource:0}: Error finding container 5b6443d25d7b17ed816c3bd2b5a031639f52caf117cd1730e462a73088a7ecaa: Status 404 returned error can't find the container with id 5b6443d25d7b17ed816c3bd2b5a031639f52caf117cd1730e462a73088a7ecaa Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.532373 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qdkmz"] Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.573552 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsknm\" (UniqueName: \"kubernetes.io/projected/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-kube-api-access-bsknm\") pod \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.573812 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2jpg\" (UniqueName: \"kubernetes.io/projected/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-kube-api-access-t2jpg\") pod \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.573886 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-dns-svc\") pod \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.574056 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-config\") pod \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\" (UID: \"fb2a8db8-862c-4550-a240-6b3a0fb03f5a\") " Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.574167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-config\") pod \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\" (UID: \"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf\") " Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.574679 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-config" (OuterVolumeSpecName: "config") pod "fb2a8db8-862c-4550-a240-6b3a0fb03f5a" (UID: "fb2a8db8-862c-4550-a240-6b3a0fb03f5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.574848 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-config" (OuterVolumeSpecName: "config") pod "73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf" (UID: "73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.574934 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.575438 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf" (UID: "73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.580574 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-kube-api-access-t2jpg" (OuterVolumeSpecName: "kube-api-access-t2jpg") pod "fb2a8db8-862c-4550-a240-6b3a0fb03f5a" (UID: "fb2a8db8-862c-4550-a240-6b3a0fb03f5a"). InnerVolumeSpecName "kube-api-access-t2jpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.580927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-kube-api-access-bsknm" (OuterVolumeSpecName: "kube-api-access-bsknm") pod "73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf" (UID: "73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf"). InnerVolumeSpecName "kube-api-access-bsknm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.679053 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2jpg\" (UniqueName: \"kubernetes.io/projected/fb2a8db8-862c-4550-a240-6b3a0fb03f5a-kube-api-access-t2jpg\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.679100 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.679113 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.679126 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsknm\" (UniqueName: \"kubernetes.io/projected/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf-kube-api-access-bsknm\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.717994 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.729009 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4"] Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.912072 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt" event={"ID":"73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf","Type":"ContainerDied","Data":"01ec48c1682bcb55914936c58df9d1c93742e558a94aeae3c52818df25372766"} Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.913994 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-jpxpt" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.918207 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c04ec7cf-944b-4279-8426-3485a93d6bae","Type":"ContainerStarted","Data":"c509fb4ea4d7762b67cba84292de1438d0ad2c18c79e6d11e62f31dafc775c61"} Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.921588 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.921664 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz" event={"ID":"10c1d9ff-78da-4916-b01b-e4eb2935c4c3","Type":"ContainerStarted","Data":"5b6443d25d7b17ed816c3bd2b5a031639f52caf117cd1730e462a73088a7ecaa"} Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.935738 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" event={"ID":"80da01e2-2367-4834-9237-e25fe95a3728","Type":"ContainerStarted","Data":"9c5369ce38e8ce1617ab545e2d66231156ac592fe4b1d64d64ea8994b867c751"} Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.944541 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr" Jan 23 08:15:13 crc kubenswrapper[4982]: I0123 08:15:13.944385 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-tvbwr" event={"ID":"fb2a8db8-862c-4550-a240-6b3a0fb03f5a","Type":"ContainerDied","Data":"557a461974a9a3735f2a2e0ae4c32262d62f44368902283b3074db87073d02f6"} Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.003770 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jpxpt"] Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.029031 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-jpxpt"] Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.058002 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"] Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.073599 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-tvbwr"] Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.483030 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 08:15:14 crc kubenswrapper[4982]: W0123 08:15:14.535570 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07620928_c400_47dd_b76d_25f31c3c1e0e.slice/crio-a2f6674c26428112bac1ecc2c1cb34b7d666361ef915c832783348815ec10f52 WatchSource:0}: Error finding container a2f6674c26428112bac1ecc2c1cb34b7d666361ef915c832783348815ec10f52: Status 404 returned error can't find the container with id a2f6674c26428112bac1ecc2c1cb34b7d666361ef915c832783348815ec10f52 Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.951771 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6ksf4"] Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.953838 4982 generic.go:334] "Generic (PLEG): container finished" podID="80da01e2-2367-4834-9237-e25fe95a3728" containerID="ba70dd6792cc52f19b94f37785e5764d467316fb10b7d8334d3fa8a2c1a924d9" exitCode=0 Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.953900 4982 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" event={"ID":"80da01e2-2367-4834-9237-e25fe95a3728","Type":"ContainerDied","Data":"ba70dd6792cc52f19b94f37785e5764d467316fb10b7d8334d3fa8a2c1a924d9"} Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.955606 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"07620928-c400-47dd-b76d-25f31c3c1e0e","Type":"ContainerStarted","Data":"a2f6674c26428112bac1ecc2c1cb34b7d666361ef915c832783348815ec10f52"} Jan 23 08:15:14 crc kubenswrapper[4982]: I0123 08:15:14.958092 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9","Type":"ContainerStarted","Data":"b45b6caf4606e9dc21976de349eaea574e1ab02f3b6d07e733659b2b61b2a564"} Jan 23 08:15:15 crc kubenswrapper[4982]: I0123 08:15:15.845161 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf" path="/var/lib/kubelet/pods/73f39c4b-d9c0-45e4-ab2c-c5b36c4189bf/volumes" Jan 23 08:15:15 crc kubenswrapper[4982]: I0123 08:15:15.845655 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb2a8db8-862c-4550-a240-6b3a0fb03f5a" path="/var/lib/kubelet/pods/fb2a8db8-862c-4550-a240-6b3a0fb03f5a/volumes" Jan 23 08:15:15 crc kubenswrapper[4982]: I0123 08:15:15.967951 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerStarted","Data":"f3386b3e60a8fe6d15a9130629fb69a8a35ab4085c054af6bee7be2a7402fd54"} Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.297537 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.392163 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80da01e2-2367-4834-9237-e25fe95a3728-secret-volume\") pod \"80da01e2-2367-4834-9237-e25fe95a3728\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.392744 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80da01e2-2367-4834-9237-e25fe95a3728-config-volume\") pod \"80da01e2-2367-4834-9237-e25fe95a3728\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.392821 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpf85\" (UniqueName: \"kubernetes.io/projected/80da01e2-2367-4834-9237-e25fe95a3728-kube-api-access-cpf85\") pod \"80da01e2-2367-4834-9237-e25fe95a3728\" (UID: \"80da01e2-2367-4834-9237-e25fe95a3728\") " Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.393371 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da01e2-2367-4834-9237-e25fe95a3728-config-volume" (OuterVolumeSpecName: "config-volume") pod "80da01e2-2367-4834-9237-e25fe95a3728" (UID: "80da01e2-2367-4834-9237-e25fe95a3728"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.394146 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80da01e2-2367-4834-9237-e25fe95a3728-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.401491 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80da01e2-2367-4834-9237-e25fe95a3728-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "80da01e2-2367-4834-9237-e25fe95a3728" (UID: "80da01e2-2367-4834-9237-e25fe95a3728"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.403821 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80da01e2-2367-4834-9237-e25fe95a3728-kube-api-access-cpf85" (OuterVolumeSpecName: "kube-api-access-cpf85") pod "80da01e2-2367-4834-9237-e25fe95a3728" (UID: "80da01e2-2367-4834-9237-e25fe95a3728"). InnerVolumeSpecName "kube-api-access-cpf85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.495707 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80da01e2-2367-4834-9237-e25fe95a3728-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.495741 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpf85\" (UniqueName: \"kubernetes.io/projected/80da01e2-2367-4834-9237-e25fe95a3728-kube-api-access-cpf85\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.997270 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" event={"ID":"80da01e2-2367-4834-9237-e25fe95a3728","Type":"ContainerDied","Data":"9c5369ce38e8ce1617ab545e2d66231156ac592fe4b1d64d64ea8994b867c751"} Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.997345 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c5369ce38e8ce1617ab545e2d66231156ac592fe4b1d64d64ea8994b867c751" Jan 23 08:15:17 crc kubenswrapper[4982]: I0123 08:15:17.997352 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4" Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.028047 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerStarted","Data":"876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.030770 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c04ec7cf-944b-4279-8426-3485a93d6bae","Type":"ContainerStarted","Data":"92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.030880 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.032787 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz" event={"ID":"10c1d9ff-78da-4916-b01b-e4eb2935c4c3","Type":"ContainerStarted","Data":"fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.032902 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qdkmz" Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.034692 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16","Type":"ContainerStarted","Data":"780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.034908 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.036453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"07620928-c400-47dd-b76d-25f31c3c1e0e","Type":"ContainerStarted","Data":"3cd9a32a141012cf4bba8863a14bb8c7f3122d23d541a734493f06ad3ca70f53"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.037963 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"be869517-a530-472f-b11e-59558aafc066","Type":"ContainerStarted","Data":"857021a4975f214c420aba4c2fef9f900f3e0c5a5cc647383b732a391cb0991e"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.039720 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9","Type":"ContainerStarted","Data":"5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515"} Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.086379 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qdkmz" podStartSLOduration=29.604601695 podStartE2EDuration="35.086357508s" podCreationTimestamp="2026-01-23 08:14:46 +0000 UTC" firstStartedPulling="2026-01-23 08:15:13.536059789 +0000 UTC m=+1188.016423175" lastFinishedPulling="2026-01-23 08:15:19.017815592 +0000 UTC m=+1193.498178988" observedRunningTime="2026-01-23 08:15:21.080604196 +0000 UTC m=+1195.560967582" watchObservedRunningTime="2026-01-23 08:15:21.086357508 +0000 UTC m=+1195.566720904" Jan 23 08:15:21 crc kubenswrapper[4982]: I0123 08:15:21.117579 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=32.098917467 podStartE2EDuration="38.117557066s" podCreationTimestamp="2026-01-23 08:14:43 +0000 UTC" firstStartedPulling="2026-01-23 08:15:13.745203554 +0000 UTC m=+1188.225566940" lastFinishedPulling="2026-01-23 08:15:19.763843163 +0000 UTC m=+1194.244206539" observedRunningTime="2026-01-23 08:15:21.109528043 +0000 UTC m=+1195.589891429" watchObservedRunningTime="2026-01-23 08:15:21.117557066 +0000 UTC m=+1195.597920452" Jan 23 08:15:22 crc kubenswrapper[4982]: I0123 08:15:22.061476 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerID="876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855" exitCode=0 Jan 23 08:15:22 crc kubenswrapper[4982]: I0123 08:15:22.061550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerDied","Data":"876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855"} Jan 23 08:15:22 crc kubenswrapper[4982]: I0123 08:15:22.064289 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b4b8957-4a36-4d8b-a591-fdcbe696c808","Type":"ContainerStarted","Data":"0d2e98831fddc20f7bb33d42b529158a780bb8345579dad163a63f455e992e0d"} Jan 23 08:15:22 crc kubenswrapper[4982]: I0123 08:15:22.067299 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26d23dde-8aac-481b-bfdb-ce3315166bc4","Type":"ContainerStarted","Data":"ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef"} Jan 23 08:15:22 crc kubenswrapper[4982]: I0123 08:15:22.093512 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.268544309 podStartE2EDuration="42.093488121s" podCreationTimestamp="2026-01-23 08:14:40 +0000 UTC" firstStartedPulling="2026-01-23 08:14:41.939412344 +0000 UTC m=+1156.419775730" lastFinishedPulling="2026-01-23 08:15:19.764356156 +0000 UTC m=+1194.244719542" observedRunningTime="2026-01-23 08:15:21.14111562 +0000 UTC m=+1195.621479006" watchObservedRunningTime="2026-01-23 08:15:22.093488121 +0000 UTC m=+1196.573851507" Jan 23 08:15:23 crc kubenswrapper[4982]: I0123 08:15:23.082163 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerStarted","Data":"52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563"} Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.103572 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9","Type":"ContainerStarted","Data":"2fd889b537634e373cd79c15112db577cb478310dae8eb3f23acd8930c050163"} Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.107087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerStarted","Data":"3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713"} Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.107296 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.108480 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"07620928-c400-47dd-b76d-25f31c3c1e0e","Type":"ContainerStarted","Data":"61f6f16773111fc5cd5315ce04a7454bb22686d399b1293d696bebdc325184ec"} Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.110002 4982 generic.go:334] "Generic (PLEG): container finished" podID="be869517-a530-472f-b11e-59558aafc066" containerID="857021a4975f214c420aba4c2fef9f900f3e0c5a5cc647383b732a391cb0991e" exitCode=0 Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.110050 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"be869517-a530-472f-b11e-59558aafc066","Type":"ContainerDied","Data":"857021a4975f214c420aba4c2fef9f900f3e0c5a5cc647383b732a391cb0991e"} Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.130987 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.743183054 podStartE2EDuration="38.130948497s" podCreationTimestamp="2026-01-23 08:14:47 +0000 UTC" firstStartedPulling="2026-01-23 08:15:13.935452908 +0000 UTC m=+1188.415816294" lastFinishedPulling="2026-01-23 08:15:24.323218351 +0000 UTC m=+1198.803581737" observedRunningTime="2026-01-23 08:15:25.128695868 +0000 UTC m=+1199.609059284" watchObservedRunningTime="2026-01-23 08:15:25.130948497 +0000 UTC m=+1199.611311883" Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.228950 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6ksf4" podStartSLOduration=34.943398648 podStartE2EDuration="39.228928935s" podCreationTimestamp="2026-01-23 08:14:46 +0000 UTC" firstStartedPulling="2026-01-23 08:15:15.173519094 +0000 UTC m=+1189.653882480" lastFinishedPulling="2026-01-23 08:15:19.459049381 +0000 UTC m=+1193.939412767" observedRunningTime="2026-01-23 08:15:25.212084469 +0000 UTC m=+1199.692447855" watchObservedRunningTime="2026-01-23 08:15:25.228928935 +0000 UTC m=+1199.709292321" Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.274255 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.476299453 podStartE2EDuration="36.274234066s" podCreationTimestamp="2026-01-23 08:14:49 +0000 UTC" firstStartedPulling="2026-01-23 08:15:14.537999045 +0000 UTC m=+1189.018362431" lastFinishedPulling="2026-01-23 08:15:24.335933658 +0000 UTC m=+1198.816297044" observedRunningTime="2026-01-23 08:15:25.243107411 +0000 UTC m=+1199.723470797" watchObservedRunningTime="2026-01-23 08:15:25.274234066 +0000 UTC m=+1199.754597452" Jan 23 08:15:25 crc kubenswrapper[4982]: I0123 08:15:25.629309 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 08:15:26 crc kubenswrapper[4982]: I0123 08:15:26.015786 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 08:15:26 crc kubenswrapper[4982]: I0123 08:15:26.123614 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"be869517-a530-472f-b11e-59558aafc066","Type":"ContainerStarted","Data":"881b906ef1be3ae613692e976ccc55331837c84c9820bb020dc06fe11f437493"} Jan 23 08:15:26 crc kubenswrapper[4982]: I0123 08:15:26.134661 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4f4741be-bf98-457a-9275-45405ac3f257","Type":"ContainerStarted","Data":"6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2"} Jan 23 08:15:26 crc 
kubenswrapper[4982]: I0123 08:15:26.135104 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6ksf4"
Jan 23 08:15:26 crc kubenswrapper[4982]: I0123 08:15:26.165104 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.30109534 podStartE2EDuration="47.165077005s" podCreationTimestamp="2026-01-23 08:14:39 +0000 UTC" firstStartedPulling="2026-01-23 08:14:41.692722825 +0000 UTC m=+1156.173086211" lastFinishedPulling="2026-01-23 08:15:19.55670449 +0000 UTC m=+1194.037067876" observedRunningTime="2026-01-23 08:15:26.148785703 +0000 UTC m=+1200.629149099" watchObservedRunningTime="2026-01-23 08:15:26.165077005 +0000 UTC m=+1200.645440401"
Jan 23 08:15:26 crc kubenswrapper[4982]: I0123 08:15:26.628351 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 23 08:15:26 crc kubenswrapper[4982]: I0123 08:15:26.668013 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.144447 4982 generic.go:334] "Generic (PLEG): container finished" podID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerID="dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d" exitCode=0
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.144558 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" event={"ID":"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0","Type":"ContainerDied","Data":"dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d"}
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.147534 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerID="3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963" exitCode=0
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.147617 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" event={"ID":"1d4dfa64-9515-462d-a0dc-e9a61b95e022","Type":"ContainerDied","Data":"3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963"}
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.210322 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.563305 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-qv9kx"]
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.603100 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-94b6f"]
Jan 23 08:15:27 crc kubenswrapper[4982]: E0123 08:15:27.603752 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80da01e2-2367-4834-9237-e25fe95a3728" containerName="collect-profiles"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.603770 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="80da01e2-2367-4834-9237-e25fe95a3728" containerName="collect-profiles"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.603983 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="80da01e2-2367-4834-9237-e25fe95a3728" containerName="collect-profiles"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.605002 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.607036 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.623964 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-94b6f"]
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.699789 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jvgnd"]
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.700931 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.703206 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.709933 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.710164 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-config\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.710308 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4psx\" (UniqueName: \"kubernetes.io/projected/f0c15943-2175-4854-bb72-ebdbc6833743-kube-api-access-v4psx\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.710459 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-dns-svc\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.725283 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jvgnd"]
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.812313 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4psx\" (UniqueName: \"kubernetes.io/projected/f0c15943-2175-4854-bb72-ebdbc6833743-kube-api-access-v4psx\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.812656 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovn-rundir\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.812805 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-combined-ca-bundle\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.812936 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75glv\" (UniqueName: \"kubernetes.io/projected/95b0abeb-729f-49a9-8d9f-d931ec9f0814-kube-api-access-75glv\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.813053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-dns-svc\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.813165 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.813318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.813418 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovs-rundir\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.813524 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-config\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.813653 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b0abeb-729f-49a9-8d9f-d931ec9f0814-config\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.814996 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-ovsdbserver-sb\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.814996 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-dns-svc\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.815340 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-config\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.834238 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4psx\" (UniqueName: \"kubernetes.io/projected/f0c15943-2175-4854-bb72-ebdbc6833743-kube-api-access-v4psx\") pod \"dnsmasq-dns-5b79764b65-94b6f\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.899973 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916088 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovs-rundir\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916179 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b0abeb-729f-49a9-8d9f-d931ec9f0814-config\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916250 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovn-rundir\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916279 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-combined-ca-bundle\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916321 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75glv\" (UniqueName: \"kubernetes.io/projected/95b0abeb-729f-49a9-8d9f-d931ec9f0814-kube-api-access-75glv\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916377 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.916563 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovn-rundir\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.917437 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b0abeb-729f-49a9-8d9f-d931ec9f0814-config\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.918328 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovs-rundir\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.920045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.927127 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.929606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-combined-ca-bundle\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.937371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75glv\" (UniqueName: \"kubernetes.io/projected/95b0abeb-729f-49a9-8d9f-d931ec9f0814-kube-api-access-75glv\") pod \"ovn-controller-metrics-jvgnd\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:27 crc kubenswrapper[4982]: I0123 08:15:27.947394 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.026428 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jvgnd"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.162886 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-v6xt9"]
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.172227 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" event={"ID":"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0","Type":"ContainerStarted","Data":"bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad"}
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.172644 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerName="dnsmasq-dns" containerID="cri-o://bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad" gracePeriod=10
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.172898 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.189465 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerName="dnsmasq-dns" containerID="cri-o://edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d" gracePeriod=10
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.189549 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" event={"ID":"1d4dfa64-9515-462d-a0dc-e9a61b95e022","Type":"ContainerStarted","Data":"edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d"}
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.189833 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.190087 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.221608 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" podStartSLOduration=4.072746436 podStartE2EDuration="52.221587643s" podCreationTimestamp="2026-01-23 08:14:36 +0000 UTC" firstStartedPulling="2026-01-23 08:14:37.933901325 +0000 UTC m=+1152.414264711" lastFinishedPulling="2026-01-23 08:15:26.082742532 +0000 UTC m=+1200.563105918" observedRunningTime="2026-01-23 08:15:28.203855013 +0000 UTC m=+1202.684218409" watchObservedRunningTime="2026-01-23 08:15:28.221587643 +0000 UTC m=+1202.701951029"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.266301 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-m2lcw"]
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.278690 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.288768 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.322346 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" podStartSLOduration=3.799080781 podStartE2EDuration="52.322327034s" podCreationTimestamp="2026-01-23 08:14:36 +0000 UTC" firstStartedPulling="2026-01-23 08:14:37.672812784 +0000 UTC m=+1152.153176170" lastFinishedPulling="2026-01-23 08:15:26.196059037 +0000 UTC m=+1200.676422423" observedRunningTime="2026-01-23 08:15:28.277618758 +0000 UTC m=+1202.757982164" watchObservedRunningTime="2026-01-23 08:15:28.322327034 +0000 UTC m=+1202.802690420"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.395017 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-m2lcw"]
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.438727 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.438814 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.438865 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-dns-svc\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.438915 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drmxn\" (UniqueName: \"kubernetes.io/projected/7b41f0a6-0b83-4384-b3d2-4b26942f7152-kube-api-access-drmxn\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.439001 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-config\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.485569 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.539602 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.539684 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-dns-svc\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.539739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drmxn\" (UniqueName: \"kubernetes.io/projected/7b41f0a6-0b83-4384-b3d2-4b26942f7152-kube-api-access-drmxn\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.539829 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-config\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.539872 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.549889 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.551305 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.551753 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-dns-svc\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.556009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-config\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.578230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drmxn\" (UniqueName: \"kubernetes.io/projected/7b41f0a6-0b83-4384-b3d2-4b26942f7152-kube-api-access-drmxn\") pod \"dnsmasq-dns-586b989cdc-m2lcw\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.701982 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.765685 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.767258 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.770331 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.770653 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.770824 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.777433 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-r2zsx"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.783827 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-config\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846427 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846446 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846572 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846616 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-scripts\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.846795 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwm4t\" (UniqueName: \"kubernetes.io/projected/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-kube-api-access-zwm4t\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.849991 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-94b6f"]
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948163 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwm4t\" (UniqueName: \"kubernetes.io/projected/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-kube-api-access-zwm4t\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948574 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948682 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-config\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948708 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948731 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948779 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.948809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-scripts\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.949680 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-scripts\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.951255 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.953747 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-config\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.965404 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.966247 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.967421 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwm4t\" (UniqueName: \"kubernetes.io/projected/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-kube-api-access-zwm4t\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.969821 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") " pod="openstack/ovn-northd-0"
Jan 23 08:15:28 crc kubenswrapper[4982]: W0123 08:15:28.984770 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b0abeb_729f_49a9_8d9f_d931ec9f0814.slice/crio-c6b6c69eac048ad07ae465b119d56644b1b635f2eb06935f69539c1f83e0c20c WatchSource:0}: Error finding container c6b6c69eac048ad07ae465b119d56644b1b635f2eb06935f69539c1f83e0c20c: Status 404 returned error can't find the container with id c6b6c69eac048ad07ae465b119d56644b1b635f2eb06935f69539c1f83e0c20c
Jan 23 08:15:28 crc kubenswrapper[4982]: I0123 08:15:28.989929 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jvgnd"]
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.093744 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.122470 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.131530 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.217405 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerID="edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d" exitCode=0
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.217573 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" event={"ID":"1d4dfa64-9515-462d-a0dc-e9a61b95e022","Type":"ContainerDied","Data":"edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d"}
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.217712 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.217830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-v6xt9" event={"ID":"1d4dfa64-9515-462d-a0dc-e9a61b95e022","Type":"ContainerDied","Data":"4bc94ba765678d55cad432371cf10aea237c9bdaf276d57af27e8f4d8afef371"}
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.217865 4982 scope.go:117] "RemoveContainer" containerID="edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.223961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jvgnd" event={"ID":"95b0abeb-729f-49a9-8d9f-d931ec9f0814","Type":"ContainerStarted","Data":"c6b6c69eac048ad07ae465b119d56644b1b635f2eb06935f69539c1f83e0c20c"}
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.230701 4982 generic.go:334] "Generic (PLEG): container finished" podID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerID="bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad" exitCode=0
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.230795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" event={"ID":"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0","Type":"ContainerDied","Data":"bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad"}
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.230834 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx" event={"ID":"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0","Type":"ContainerDied","Data":"16c24ae706c9a3c9d2693301483ee9689a581bbeeec35fef233af8920ae17e04"}
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.230925 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-qv9kx"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.235204 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" event={"ID":"f0c15943-2175-4854-bb72-ebdbc6833743","Type":"ContainerStarted","Data":"07c4c19baa588690f57550700ead96de9023c252c95de2e299d88cc24b185f14"}
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.245050 4982 scope.go:117] "RemoveContainer" containerID="3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.253337 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmnkb\" (UniqueName: \"kubernetes.io/projected/1d4dfa64-9515-462d-a0dc-e9a61b95e022-kube-api-access-wmnkb\") pod \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") "
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.253421 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzv4k\" (UniqueName: \"kubernetes.io/projected/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-kube-api-access-xzv4k\") pod \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") "
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.253498 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-config\") pod \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") "
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.253544 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-config\") pod \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") "
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.253673 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-dns-svc\") pod \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\" (UID: \"05fbf7b6-b095-4e1b-b23c-e5c937ef5af0\") "
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.253846 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-dns-svc\") pod \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\" (UID: \"1d4dfa64-9515-462d-a0dc-e9a61b95e022\") "
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.257401 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d4dfa64-9515-462d-a0dc-e9a61b95e022-kube-api-access-wmnkb" (OuterVolumeSpecName: "kube-api-access-wmnkb") pod "1d4dfa64-9515-462d-a0dc-e9a61b95e022" (UID: "1d4dfa64-9515-462d-a0dc-e9a61b95e022"). InnerVolumeSpecName "kube-api-access-wmnkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.260801 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-kube-api-access-xzv4k" (OuterVolumeSpecName: "kube-api-access-xzv4k") pod "05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" (UID: "05fbf7b6-b095-4e1b-b23c-e5c937ef5af0"). InnerVolumeSpecName "kube-api-access-xzv4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.291856 4982 scope.go:117] "RemoveContainer" containerID="edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d"
Jan 23 08:15:29 crc kubenswrapper[4982]: E0123 08:15:29.292885 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d\": container with ID starting with edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d not found: ID does not exist" containerID="edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.292919 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d"} err="failed to get container status \"edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d\": rpc error: code = NotFound desc = could not find container \"edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d\": container with ID starting with edb9da97804735443501cd4081a1f263c35d2532b2e3032e4b74c5c18665f09d not found: ID does not exist"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.292939 4982 scope.go:117] "RemoveContainer" containerID="3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963"
Jan 23 08:15:29 crc kubenswrapper[4982]: E0123 08:15:29.293963 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963\": container with ID starting with 3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963 not found: ID does not exist" containerID="3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.293986 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963"} err="failed to get container status \"3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963\": rpc error: code = NotFound desc = could not find container \"3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963\": container with ID starting with 3623213385cb408b6ca962a741599632e01a4cc00e185baf1632655f57e2a963 not found: ID does not exist"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.294005 4982 scope.go:117] "RemoveContainer" containerID="bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.308414 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-config" (OuterVolumeSpecName: "config") pod "05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" (UID: "05fbf7b6-b095-4e1b-b23c-e5c937ef5af0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.316417 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d4dfa64-9515-462d-a0dc-e9a61b95e022" (UID: "1d4dfa64-9515-462d-a0dc-e9a61b95e022"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.319164 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-config" (OuterVolumeSpecName: "config") pod "1d4dfa64-9515-462d-a0dc-e9a61b95e022" (UID: "1d4dfa64-9515-462d-a0dc-e9a61b95e022"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.326747 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-m2lcw"]
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.334438 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" (UID: "05fbf7b6-b095-4e1b-b23c-e5c937ef5af0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.350191 4982 scope.go:117] "RemoveContainer" containerID="dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.356474 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.356499 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.356509 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmnkb\" (UniqueName: \"kubernetes.io/projected/1d4dfa64-9515-462d-a0dc-e9a61b95e022-kube-api-access-wmnkb\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.356518 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzv4k\" (UniqueName: \"kubernetes.io/projected/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-kube-api-access-xzv4k\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.356528 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0-config\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.356536 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4dfa64-9515-462d-a0dc-e9a61b95e022-config\") on node \"crc\" DevicePath \"\""
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.384011 4982 scope.go:117] "RemoveContainer" containerID="bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad"
Jan 23 08:15:29 crc kubenswrapper[4982]: E0123 08:15:29.386674 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad\": container with ID starting with bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad not found: ID does not exist" containerID="bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.386788 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad"} err="failed to get container status \"bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad\": rpc error: code = NotFound desc = could not find container \"bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad\": container with ID starting with bb11eec76739bc932bc92ac0a9701468e725608e0d81ac9ac0a74cf0d08410ad not found: ID does not exist"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.386878 4982 scope.go:117] "RemoveContainer" containerID="dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d"
Jan 23 08:15:29 crc kubenswrapper[4982]: E0123 08:15:29.387427 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d\": container with ID starting with dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d not found: ID does not exist" containerID="dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.387452 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d"} err="failed to get container status \"dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d\": rpc error: code = NotFound desc = could not find container \"dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d\": container with ID starting with dc878ce90a154e08dd250c2fa905c9e39c53bf8f01e9eeae71d084c350fc1d8d not found: ID does not exist"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.553842 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-v6xt9"]
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.563800 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-v6xt9"]
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.584825 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-qv9kx"]
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.594318 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-qv9kx"]
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.642473 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 23 08:15:29 crc kubenswrapper[4982]: W0123 08:15:29.677051 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e81c71a_ceaf_4e4a_b875_da5f9abcdd06.slice/crio-90a78cc77e8c663f939a2330de37f397f34a1200d83d5d74765b29e1d020d9f2 WatchSource:0}: Error finding container 90a78cc77e8c663f939a2330de37f397f34a1200d83d5d74765b29e1d020d9f2: Status 404 returned error can't find the container with id 90a78cc77e8c663f939a2330de37f397f34a1200d83d5d74765b29e1d020d9f2
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.836999 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" path="/var/lib/kubelet/pods/05fbf7b6-b095-4e1b-b23c-e5c937ef5af0/volumes"
Jan 23 08:15:29 crc kubenswrapper[4982]: I0123 08:15:29.837838 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" path="/var/lib/kubelet/pods/1d4dfa64-9515-462d-a0dc-e9a61b95e022/volumes"
Jan 23 08:15:30 crc kubenswrapper[4982]: E0123 08:15:30.136816 4982 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.168:36920->38.129.56.168:33023: write tcp 38.129.56.168:36920->38.129.56.168:33023: write: broken pipe
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.244060 4982 generic.go:334] "Generic (PLEG): container finished" podID="f0c15943-2175-4854-bb72-ebdbc6833743" containerID="24b2d9e669bda05086462abeca72d8b1c195e586203509c1f6a637c2037dd062" exitCode=0
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.244201 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" event={"ID":"f0c15943-2175-4854-bb72-ebdbc6833743","Type":"ContainerDied","Data":"24b2d9e669bda05086462abeca72d8b1c195e586203509c1f6a637c2037dd062"}
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.248536 4982 generic.go:334] "Generic (PLEG): container finished" podID="4f4741be-bf98-457a-9275-45405ac3f257" containerID="6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2" exitCode=0
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.248622 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4f4741be-bf98-457a-9275-45405ac3f257","Type":"ContainerDied","Data":"6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2"}
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.250782 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jvgnd" event={"ID":"95b0abeb-729f-49a9-8d9f-d931ec9f0814","Type":"ContainerStarted","Data":"6d3c5732d58d4047ed886c95a750296b113ce1ccf59d5efffc3d6754ac5aa0bb"}
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.253099 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06","Type":"ContainerStarted","Data":"90a78cc77e8c663f939a2330de37f397f34a1200d83d5d74765b29e1d020d9f2"}
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.258317 4982 generic.go:334] "Generic (PLEG): container finished" podID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerID="bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab" exitCode=0
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.258509 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" event={"ID":"7b41f0a6-0b83-4384-b3d2-4b26942f7152","Type":"ContainerDied","Data":"bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab"}
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.258556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" event={"ID":"7b41f0a6-0b83-4384-b3d2-4b26942f7152","Type":"ContainerStarted","Data":"600a5c3e93dd6541297ebd2008c2cd4a8c0570ee72dfae3d6cf695be2c0415e0"}
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.361607 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jvgnd" podStartSLOduration=3.361578642 podStartE2EDuration="3.361578642s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:30.32000527 +0000 UTC m=+1204.800368676" watchObservedRunningTime="2026-01-23 08:15:30.361578642 +0000 UTC m=+1204.841942038"
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.667083 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 23 08:15:30 crc kubenswrapper[4982]: I0123 08:15:30.667162 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.269460 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" event={"ID":"7b41f0a6-0b83-4384-b3d2-4b26942f7152","Type":"ContainerStarted","Data":"9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285"}
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.269980 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw"
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.273147 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" event={"ID":"f0c15943-2175-4854-bb72-ebdbc6833743","Type":"ContainerStarted","Data":"d8d5274a44b413a7337658e8c8a1cdb93721aa9650720a829df365863e2285cd"}
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.273318 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b79764b65-94b6f"
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.275342 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4f4741be-bf98-457a-9275-45405ac3f257","Type":"ContainerStarted","Data":"c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2"}
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.298064 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" podStartSLOduration=3.298040552 podStartE2EDuration="3.298040552s" podCreationTimestamp="2026-01-23 08:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:31.290276696 +0000 UTC m=+1205.770640082" watchObservedRunningTime="2026-01-23 08:15:31.298040552 +0000 UTC m=+1205.778403938"
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.325072 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371983.529722 podStartE2EDuration="53.325053018s" podCreationTimestamp="2026-01-23 08:14:38 +0000 UTC" firstStartedPulling="2026-01-23 08:14:40.034787888 +0000 UTC m=+1154.515151274" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:31.323002734 +0000 UTC m=+1205.803366130" watchObservedRunningTime="2026-01-23 08:15:31.325053018 +0000 UTC m=+1205.805416404"
Jan 23 08:15:31 crc kubenswrapper[4982]: I0123 08:15:31.348674 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" podStartSLOduration=4.348655504 podStartE2EDuration="4.348655504s" podCreationTimestamp="2026-01-23 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:31.341811913 +0000 UTC m=+1205.822175309" watchObservedRunningTime="2026-01-23 08:15:31.348655504 +0000 UTC m=+1205.829018890"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.617338 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.746513 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-94b6f"]
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.747315 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="dnsmasq-dns" containerID="cri-o://d8d5274a44b413a7337658e8c8a1cdb93721aa9650720a829df365863e2285cd" gracePeriod=10
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.815940 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-89w5f"]
Jan 23 08:15:33 crc kubenswrapper[4982]: E0123 08:15:33.816436 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerName="dnsmasq-dns"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.816462 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerName="dnsmasq-dns"
Jan 23 08:15:33 crc kubenswrapper[4982]: E0123 08:15:33.816480 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerName="dnsmasq-dns"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.816489 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerName="dnsmasq-dns"
Jan 23 08:15:33 crc kubenswrapper[4982]: E0123 08:15:33.816509 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerName="init"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.816517 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerName="init"
Jan 23 08:15:33 crc kubenswrapper[4982]: E0123 08:15:33.816541 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerName="init"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.816548 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerName="init"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.816801 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d4dfa64-9515-462d-a0dc-e9a61b95e022" containerName="dnsmasq-dns"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.816824 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fbf7b6-b095-4e1b-b23c-e5c937ef5af0" containerName="dnsmasq-dns"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.818112 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.929877 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-89w5f"]
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.998311 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-config\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.998386 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.998413 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.998493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2nkl\" (UniqueName: \"kubernetes.io/projected/f61e33e5-edf4-4387-92ff-f96f7b0b898c-kube-api-access-r2nkl\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:33 crc kubenswrapper[4982]: I0123 08:15:33.998534 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.101685 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-config\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.101757 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.101784 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.101861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2nkl\" (UniqueName: \"kubernetes.io/projected/f61e33e5-edf4-4387-92ff-f96f7b0b898c-kube-api-access-r2nkl\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.101903 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.102848 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.103429 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-config\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.104303 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.104402 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.157894 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2nkl\" (UniqueName: \"kubernetes.io/projected/f61e33e5-edf4-4387-92ff-f96f7b0b898c-kube-api-access-r2nkl\") pod \"dnsmasq-dns-67fdf7998c-89w5f\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.234959 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f"
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.300724 4982 generic.go:334] "Generic (PLEG): container finished" podID="f0c15943-2175-4854-bb72-ebdbc6833743" containerID="d8d5274a44b413a7337658e8c8a1cdb93721aa9650720a829df365863e2285cd" exitCode=0
Jan 23 08:15:34 crc kubenswrapper[4982]: I0123 08:15:34.300784 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" event={"ID":"f0c15943-2175-4854-bb72-ebdbc6833743","Type":"ContainerDied","Data":"d8d5274a44b413a7337658e8c8a1cdb93721aa9650720a829df365863e2285cd"}
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.069220 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.075250 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.078059 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.078101 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.079431 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.079585 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-kq2tm"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.089280 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.224483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.224720 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.224828 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.224920 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-lock\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.225046 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-cache\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.225182 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmxpc\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-kube-api-access-mmxpc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327052 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0"
Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327145 4982
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327168 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-lock\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327201 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-cache\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327241 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmxpc\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-kube-api-access-mmxpc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327305 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: E0123 08:15:35.327302 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:15:35 crc kubenswrapper[4982]: E0123 08:15:35.327341 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 08:15:35 crc kubenswrapper[4982]: E0123 08:15:35.327408 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift podName:5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:35.827381547 +0000 UTC m=+1210.307744983 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift") pod "swift-storage-0" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1") : configmap "swift-ring-files" not found Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327729 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-lock\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-cache\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.327839 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.335665 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.346949 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmxpc\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-kube-api-access-mmxpc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.352777 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.815156 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-mqmqp"] Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.817030 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.818858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.819365 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.819781 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.825576 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mqmqp"] Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.837700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:35 crc kubenswrapper[4982]: E0123 08:15:35.837907 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:15:35 crc kubenswrapper[4982]: E0123 08:15:35.837927 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 08:15:35 crc kubenswrapper[4982]: E0123 08:15:35.837989 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift podName:5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:36.837967655 +0000 UTC m=+1211.318331041 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift") pod "swift-storage-0" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1") : configmap "swift-ring-files" not found Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.940194 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-scripts\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.940266 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-combined-ca-bundle\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.940304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldf9\" (UniqueName: \"kubernetes.io/projected/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-kube-api-access-cldf9\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.940427 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-dispersionconf\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.940661 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-ring-data-devices\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.940995 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-etc-swift\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:35 crc kubenswrapper[4982]: I0123 08:15:35.941044 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-swiftconf\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-ring-data-devices\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043334 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-etc-swift\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043373 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-swiftconf\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043413 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-scripts\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043442 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-combined-ca-bundle\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043468 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cldf9\" (UniqueName: \"kubernetes.io/projected/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-kube-api-access-cldf9\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.043503 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-dispersionconf\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.044071 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-etc-swift\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.044757 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-ring-data-devices\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.044938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-scripts\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.048407 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-dispersionconf\") 
pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.049335 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-combined-ca-bundle\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.050046 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-swiftconf\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.067892 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cldf9\" (UniqueName: \"kubernetes.io/projected/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-kube-api-access-cldf9\") pod \"swift-ring-rebalance-mqmqp\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.194845 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:36 crc kubenswrapper[4982]: I0123 08:15:36.858466 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:36 crc kubenswrapper[4982]: E0123 08:15:36.858770 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:15:36 crc kubenswrapper[4982]: E0123 08:15:36.859106 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 08:15:36 crc kubenswrapper[4982]: E0123 08:15:36.859193 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift podName:5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:38.859167 +0000 UTC m=+1213.339530386 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift") pod "swift-storage-0" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1") : configmap "swift-ring-files" not found Jan 23 08:15:37 crc kubenswrapper[4982]: I0123 08:15:37.586577 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 08:15:37 crc kubenswrapper[4982]: I0123 08:15:37.592677 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-89w5f"] Jan 23 08:15:37 crc kubenswrapper[4982]: I0123 08:15:37.665869 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-mqmqp"] Jan 23 08:15:37 crc kubenswrapper[4982]: I0123 08:15:37.749793 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.333614 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" event={"ID":"f0c15943-2175-4854-bb72-ebdbc6833743","Type":"ContainerDied","Data":"07c4c19baa588690f57550700ead96de9023c252c95de2e299d88cc24b185f14"} Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.333954 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c4c19baa588690f57550700ead96de9023c252c95de2e299d88cc24b185f14" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.335186 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" event={"ID":"f61e33e5-edf4-4387-92ff-f96f7b0b898c","Type":"ContainerStarted","Data":"e9332a188cb6e2e0cd788cf158ed8912ae3846dbadeed2db1a52359a6877389b"} Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.336715 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mqmqp" event={"ID":"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d","Type":"ContainerStarted","Data":"869a17742878d32c48f3118c54cbecd48a87a64b2382c9eaf2da601085fa6348"} Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.512388 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.619240 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-ovsdbserver-sb\") pod \"f0c15943-2175-4854-bb72-ebdbc6833743\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.620751 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-config\") pod \"f0c15943-2175-4854-bb72-ebdbc6833743\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.620971 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4psx\" (UniqueName: \"kubernetes.io/projected/f0c15943-2175-4854-bb72-ebdbc6833743-kube-api-access-v4psx\") pod \"f0c15943-2175-4854-bb72-ebdbc6833743\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.621117 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-dns-svc\") pod \"f0c15943-2175-4854-bb72-ebdbc6833743\" (UID: \"f0c15943-2175-4854-bb72-ebdbc6833743\") " Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.631929 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c15943-2175-4854-bb72-ebdbc6833743-kube-api-access-v4psx" (OuterVolumeSpecName: "kube-api-access-v4psx") pod "f0c15943-2175-4854-bb72-ebdbc6833743" (UID: "f0c15943-2175-4854-bb72-ebdbc6833743"). InnerVolumeSpecName "kube-api-access-v4psx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.700036 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-config" (OuterVolumeSpecName: "config") pod "f0c15943-2175-4854-bb72-ebdbc6833743" (UID: "f0c15943-2175-4854-bb72-ebdbc6833743"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.702502 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0c15943-2175-4854-bb72-ebdbc6833743" (UID: "f0c15943-2175-4854-bb72-ebdbc6833743"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.704861 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.713116 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0c15943-2175-4854-bb72-ebdbc6833743" (UID: "f0c15943-2175-4854-bb72-ebdbc6833743"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.725931 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.725972 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.725988 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4psx\" (UniqueName: \"kubernetes.io/projected/f0c15943-2175-4854-bb72-ebdbc6833743-kube-api-access-v4psx\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.725999 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0c15943-2175-4854-bb72-ebdbc6833743-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:38 crc kubenswrapper[4982]: I0123 08:15:38.934869 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:38 crc kubenswrapper[4982]: E0123 08:15:38.935229 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:15:38 crc kubenswrapper[4982]: E0123 08:15:38.935253 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 08:15:38 crc kubenswrapper[4982]: E0123 08:15:38.935307 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift podName:5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:42.935289527 +0000 UTC m=+1217.415652913 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift") pod "swift-storage-0" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1") : configmap "swift-ring-files" not found Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.348594 4982 generic.go:334] "Generic (PLEG): container finished" podID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerID="4c9d47cd1cf364a2919e7fcb3f539cf3073d5acd20adc967ae18ae7a51dca66d" exitCode=0 Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.348771 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" event={"ID":"f61e33e5-edf4-4387-92ff-f96f7b0b898c","Type":"ContainerDied","Data":"4c9d47cd1cf364a2919e7fcb3f539cf3073d5acd20adc967ae18ae7a51dca66d"} Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.354657 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.356171 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06","Type":"ContainerStarted","Data":"1cdfe73af7e6305c3c12db7d9df0b35dbe1da5d1128309e49095c22ea2966020"} Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.356215 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06","Type":"ContainerStarted","Data":"9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670"} Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.356231 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.407180 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.85556242 podStartE2EDuration="11.407154878s" podCreationTimestamp="2026-01-23 08:15:28 +0000 UTC" firstStartedPulling="2026-01-23 08:15:29.679174049 +0000 UTC m=+1204.159537435" lastFinishedPulling="2026-01-23 08:15:38.230766507 +0000 UTC m=+1212.711129893" observedRunningTime="2026-01-23 08:15:39.391060041 +0000 UTC m=+1213.871423427" watchObservedRunningTime="2026-01-23 08:15:39.407154878 +0000 UTC m=+1213.887518254" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.435406 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-p4vx9"] Jan 23 08:15:39 crc kubenswrapper[4982]: E0123 08:15:39.435964 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="dnsmasq-dns" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.435984 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="dnsmasq-dns" Jan 23 08:15:39 crc kubenswrapper[4982]: E0123 08:15:39.436013 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="init" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.436020 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="init" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.436302 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="dnsmasq-dns" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.437100 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.439616 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.451678 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p4vx9"] Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.485571 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-94b6f"] Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.503004 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b79764b65-94b6f"] Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.546453 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81181903-0754-473b-8900-d7398664ea21-operator-scripts\") pod \"root-account-create-update-p4vx9\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.546540 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmj9n\" (UniqueName: \"kubernetes.io/projected/81181903-0754-473b-8900-d7398664ea21-kube-api-access-kmj9n\") pod \"root-account-create-update-p4vx9\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.547404 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.547438 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.648173 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmj9n\" (UniqueName: \"kubernetes.io/projected/81181903-0754-473b-8900-d7398664ea21-kube-api-access-kmj9n\") pod \"root-account-create-update-p4vx9\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.648428 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81181903-0754-473b-8900-d7398664ea21-operator-scripts\") pod \"root-account-create-update-p4vx9\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.650930 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81181903-0754-473b-8900-d7398664ea21-operator-scripts\") pod \"root-account-create-update-p4vx9\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.666820 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmj9n\" (UniqueName: \"kubernetes.io/projected/81181903-0754-473b-8900-d7398664ea21-kube-api-access-kmj9n\") pod \"root-account-create-update-p4vx9\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " pod="openstack/root-account-create-update-p4vx9" 
Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.671449 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.834854 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:39 crc kubenswrapper[4982]: I0123 08:15:39.840735 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" path="/var/lib/kubelet/pods/f0c15943-2175-4854-bb72-ebdbc6833743/volumes" Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.365374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" event={"ID":"f61e33e5-edf4-4387-92ff-f96f7b0b898c","Type":"ContainerStarted","Data":"72960fc5fa18b86c29b0f7bc101aa575bc0b09dacc769f230125984648aa5d4c"} Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.388037 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" podStartSLOduration=7.388016614 podStartE2EDuration="7.388016614s" podCreationTimestamp="2026-01-23 08:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:40.384040079 +0000 UTC m=+1214.864403475" watchObservedRunningTime="2026-01-23 08:15:40.388016614 +0000 UTC m=+1214.868380000" Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.470406 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.981236 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ee09-account-create-update-vv5hs"] Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.982802 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.986572 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 08:15:40 crc kubenswrapper[4982]: I0123 08:15:40.997004 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ee09-account-create-update-vv5hs"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.075169 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3351eff8-fcf1-4b8f-a12d-420d953010f0-operator-scripts\") pod \"placement-ee09-account-create-update-vv5hs\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.075227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zsv8\" (UniqueName: \"kubernetes.io/projected/3351eff8-fcf1-4b8f-a12d-420d953010f0-kube-api-access-4zsv8\") pod \"placement-ee09-account-create-update-vv5hs\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.088792 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3155-account-create-update-p7dt8"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.092595 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.096722 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.106912 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3155-account-create-update-p7dt8"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.177681 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4qn5\" (UniqueName: \"kubernetes.io/projected/19b13975-a675-44f5-a248-e806fe3729fe-kube-api-access-m4qn5\") pod \"keystone-3155-account-create-update-p7dt8\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.178013 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b13975-a675-44f5-a248-e806fe3729fe-operator-scripts\") pod \"keystone-3155-account-create-update-p7dt8\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.178098 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3351eff8-fcf1-4b8f-a12d-420d953010f0-operator-scripts\") pod \"placement-ee09-account-create-update-vv5hs\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.178132 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zsv8\" (UniqueName: 
\"kubernetes.io/projected/3351eff8-fcf1-4b8f-a12d-420d953010f0-kube-api-access-4zsv8\") pod \"placement-ee09-account-create-update-vv5hs\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.178779 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3351eff8-fcf1-4b8f-a12d-420d953010f0-operator-scripts\") pod \"placement-ee09-account-create-update-vv5hs\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.197488 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kjfz5"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.200178 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.211226 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zsv8\" (UniqueName: \"kubernetes.io/projected/3351eff8-fcf1-4b8f-a12d-420d953010f0-kube-api-access-4zsv8\") pod \"placement-ee09-account-create-update-vv5hs\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.223351 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kjfz5"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.283129 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-operator-scripts\") pod \"placement-db-create-kjfz5\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.283202 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt98p\" (UniqueName: \"kubernetes.io/projected/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-kube-api-access-kt98p\") pod \"placement-db-create-kjfz5\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.283238 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b13975-a675-44f5-a248-e806fe3729fe-operator-scripts\") pod \"keystone-3155-account-create-update-p7dt8\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.283430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4qn5\" (UniqueName: \"kubernetes.io/projected/19b13975-a675-44f5-a248-e806fe3729fe-kube-api-access-m4qn5\") pod \"keystone-3155-account-create-update-p7dt8\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.285274 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b13975-a675-44f5-a248-e806fe3729fe-operator-scripts\") pod 
\"keystone-3155-account-create-update-p7dt8\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.317583 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.318819 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dgx4m"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.320412 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.330592 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dgx4m"] Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.335775 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4qn5\" (UniqueName: \"kubernetes.io/projected/19b13975-a675-44f5-a248-e806fe3729fe-kube-api-access-m4qn5\") pod \"keystone-3155-account-create-update-p7dt8\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.377928 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.385120 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l945l\" (UniqueName: \"kubernetes.io/projected/d53cbcd7-b691-433f-a0fa-fcd319658c1b-kube-api-access-l945l\") pod \"keystone-db-create-dgx4m\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.385188 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-operator-scripts\") pod \"placement-db-create-kjfz5\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.385212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt98p\" (UniqueName: \"kubernetes.io/projected/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-kube-api-access-kt98p\") pod \"placement-db-create-kjfz5\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.385398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53cbcd7-b691-433f-a0fa-fcd319658c1b-operator-scripts\") pod \"keystone-db-create-dgx4m\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.386549 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-operator-scripts\") pod \"placement-db-create-kjfz5\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.402317 4982 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-kt98p\" (UniqueName: \"kubernetes.io/projected/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-kube-api-access-kt98p\") pod \"placement-db-create-kjfz5\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.414361 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.435996 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.436076 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.488838 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l945l\" (UniqueName: \"kubernetes.io/projected/d53cbcd7-b691-433f-a0fa-fcd319658c1b-kube-api-access-l945l\") pod \"keystone-db-create-dgx4m\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.489009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53cbcd7-b691-433f-a0fa-fcd319658c1b-operator-scripts\") pod \"keystone-db-create-dgx4m\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.489953 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53cbcd7-b691-433f-a0fa-fcd319658c1b-operator-scripts\") pod \"keystone-db-create-dgx4m\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.508975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l945l\" (UniqueName: \"kubernetes.io/projected/d53cbcd7-b691-433f-a0fa-fcd319658c1b-kube-api-access-l945l\") pod \"keystone-db-create-dgx4m\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.625413 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:41 crc kubenswrapper[4982]: I0123 08:15:41.687270 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:42 crc kubenswrapper[4982]: W0123 08:15:42.912611 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81181903_0754_473b_8900_d7398664ea21.slice/crio-900da53b923bbc00191751af2f528515fc2e0b0b24bedf08aa2e407a6b2ae98b WatchSource:0}: Error finding container 900da53b923bbc00191751af2f528515fc2e0b0b24bedf08aa2e407a6b2ae98b: Status 404 returned error can't find the container with id 900da53b923bbc00191751af2f528515fc2e0b0b24bedf08aa2e407a6b2ae98b Jan 23 08:15:42 crc kubenswrapper[4982]: I0123 08:15:42.912999 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-p4vx9"] Jan 23 08:15:42 crc kubenswrapper[4982]: I0123 08:15:42.929158 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b79764b65-94b6f" podUID="f0c15943-2175-4854-bb72-ebdbc6833743" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.138:5353: i/o timeout" Jan 23 08:15:42 crc kubenswrapper[4982]: I0123 08:15:42.935926 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:42 crc kubenswrapper[4982]: E0123 08:15:42.936173 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:15:42 crc kubenswrapper[4982]: E0123 08:15:42.936214 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 08:15:42 crc kubenswrapper[4982]: E0123 08:15:42.936279 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift podName:5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1 nodeName:}" failed. No retries permitted until 2026-01-23 08:15:50.93625803 +0000 UTC m=+1225.416621416 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift") pod "swift-storage-0" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1") : configmap "swift-ring-files" not found Jan 23 08:15:43 crc kubenswrapper[4982]: I0123 08:15:43.068566 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ee09-account-create-update-vv5hs"] Jan 23 08:15:43 crc kubenswrapper[4982]: I0123 08:15:43.167188 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dgx4m"] Jan 23 08:15:43 crc kubenswrapper[4982]: I0123 08:15:43.178158 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3155-account-create-update-p7dt8"] Jan 23 08:15:43 crc kubenswrapper[4982]: I0123 08:15:43.186557 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kjfz5"] Jan 23 08:15:43 crc kubenswrapper[4982]: I0123 08:15:43.396276 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p4vx9" event={"ID":"81181903-0754-473b-8900-d7398664ea21","Type":"ContainerStarted","Data":"900da53b923bbc00191751af2f528515fc2e0b0b24bedf08aa2e407a6b2ae98b"} Jan 23 08:15:43 crc kubenswrapper[4982]: W0123 08:15:43.785338 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3351eff8_fcf1_4b8f_a12d_420d953010f0.slice/crio-7270ff4012c899bda946a6cd66d50d1a491d101b29e1e3bac17a4bcfce8a5ceb WatchSource:0}: Error finding container 7270ff4012c899bda946a6cd66d50d1a491d101b29e1e3bac17a4bcfce8a5ceb: Status 404 returned error can't find the container with id 7270ff4012c899bda946a6cd66d50d1a491d101b29e1e3bac17a4bcfce8a5ceb Jan 23 08:15:43 crc kubenswrapper[4982]: W0123 08:15:43.788903 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd53cbcd7_b691_433f_a0fa_fcd319658c1b.slice/crio-d6473fffc363fee89e9dcd4584faf3ccf037a8dadea42a0db49dddbe410c2871 WatchSource:0}: Error finding container d6473fffc363fee89e9dcd4584faf3ccf037a8dadea42a0db49dddbe410c2871: Status 404 returned error can't find the container with id d6473fffc363fee89e9dcd4584faf3ccf037a8dadea42a0db49dddbe410c2871 Jan 23 08:15:43 crc kubenswrapper[4982]: W0123 08:15:43.801391 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod422260e7_5cc6_4ad1_9b7b_d3eccd13fa4c.slice/crio-b727d50f506d0db6253d2dd0ff16433f941cfb47a77aa89213bbf6056e279867 WatchSource:0}: Error finding container b727d50f506d0db6253d2dd0ff16433f941cfb47a77aa89213bbf6056e279867: Status 404 returned error can't find the container with id b727d50f506d0db6253d2dd0ff16433f941cfb47a77aa89213bbf6056e279867 Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.237852 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.324651 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-m2lcw"] Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.324960 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerName="dnsmasq-dns" 
containerID="cri-o://9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285" gracePeriod=10 Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.427305 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dgx4m" event={"ID":"d53cbcd7-b691-433f-a0fa-fcd319658c1b","Type":"ContainerStarted","Data":"f883d2cd12e8bbc93ce18594c30be9e5a82eb7cddb1d1c8ec54945077ddf5a84"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.427400 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dgx4m" event={"ID":"d53cbcd7-b691-433f-a0fa-fcd319658c1b","Type":"ContainerStarted","Data":"d6473fffc363fee89e9dcd4584faf3ccf037a8dadea42a0db49dddbe410c2871"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.431895 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p4vx9" event={"ID":"81181903-0754-473b-8900-d7398664ea21","Type":"ContainerStarted","Data":"4e5ecc1d662fcad4e262e1878c9f735e49eb6a3856bf628eedbc222fa5f0bff6"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.433840 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mqmqp" event={"ID":"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d","Type":"ContainerStarted","Data":"64e0007da8498c9c0f6c2a2120c194c4dd9b01a8bed0b734b2071d6d4123d58b"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.436686 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee09-account-create-update-vv5hs" event={"ID":"3351eff8-fcf1-4b8f-a12d-420d953010f0","Type":"ContainerStarted","Data":"676ce9253eff14b373d9ee4a70619b52f039634803a44d0ea713217ce8075542"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.436769 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee09-account-create-update-vv5hs" event={"ID":"3351eff8-fcf1-4b8f-a12d-420d953010f0","Type":"ContainerStarted","Data":"7270ff4012c899bda946a6cd66d50d1a491d101b29e1e3bac17a4bcfce8a5ceb"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.450017 4982 generic.go:334] "Generic (PLEG): container finished" podID="422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" containerID="1426f70b9f3cd5368570e826cee82762c724ee2dd3f610396a5107e0fe916aea" exitCode=0 Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.450165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjfz5" event={"ID":"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c","Type":"ContainerDied","Data":"1426f70b9f3cd5368570e826cee82762c724ee2dd3f610396a5107e0fe916aea"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.450222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjfz5" event={"ID":"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c","Type":"ContainerStarted","Data":"b727d50f506d0db6253d2dd0ff16433f941cfb47a77aa89213bbf6056e279867"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.465059 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-dgx4m" podStartSLOduration=3.4650124829999998 podStartE2EDuration="3.465012483s" podCreationTimestamp="2026-01-23 08:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:44.442348732 +0000 UTC m=+1218.922712118" watchObservedRunningTime="2026-01-23 08:15:44.465012483 +0000 UTC m=+1218.945375889" Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.478281 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ee09-account-create-update-vv5hs" podStartSLOduration=4.478234493 podStartE2EDuration="4.478234493s" podCreationTimestamp="2026-01-23 08:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:44.465904297 +0000 UTC m=+1218.946267683" watchObservedRunningTime="2026-01-23 08:15:44.478234493 +0000 UTC m=+1218.958597899" Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.479069 4982 generic.go:334] "Generic (PLEG): container finished" podID="19b13975-a675-44f5-a248-e806fe3729fe" containerID="ab46f0bb82a7f706ae640b6fe177c3d1384928d9dd89e36ffabd395368fe830c" exitCode=0 Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.479121 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3155-account-create-update-p7dt8" event={"ID":"19b13975-a675-44f5-a248-e806fe3729fe","Type":"ContainerDied","Data":"ab46f0bb82a7f706ae640b6fe177c3d1384928d9dd89e36ffabd395368fe830c"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.479160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3155-account-create-update-p7dt8" event={"ID":"19b13975-a675-44f5-a248-e806fe3729fe","Type":"ContainerStarted","Data":"2f4827b03f0c3cd65cc3968c2eb4feadcc4f25df6dda08d805729d03476ae31e"} Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.495380 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-mqmqp" podStartSLOduration=5.143303416 podStartE2EDuration="9.495345937s" podCreationTimestamp="2026-01-23 08:15:35 +0000 UTC" firstStartedPulling="2026-01-23 08:15:38.194125336 +0000 UTC m=+1212.674488722" lastFinishedPulling="2026-01-23 08:15:42.546167857 +0000 UTC m=+1217.026531243" observedRunningTime="2026-01-23 08:15:44.484250363 +0000 UTC m=+1218.964613749" watchObservedRunningTime="2026-01-23 08:15:44.495345937 +0000 UTC m=+1218.975709323" Jan 23 08:15:44 crc kubenswrapper[4982]: I0123 08:15:44.525424 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-p4vx9" podStartSLOduration=5.525391204 podStartE2EDuration="5.525391204s" podCreationTimestamp="2026-01-23 08:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:44.504732796 +0000 UTC m=+1218.985096312" watchObservedRunningTime="2026-01-23 08:15:44.525391204 +0000 UTC m=+1219.005754600" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.138946 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.287170 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-dns-svc\") pod \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.287882 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-config\") pod \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.288070 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-nb\") pod \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.288636 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drmxn\" (UniqueName: \"kubernetes.io/projected/7b41f0a6-0b83-4384-b3d2-4b26942f7152-kube-api-access-drmxn\") pod \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.288778 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-sb\") pod \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\" (UID: \"7b41f0a6-0b83-4384-b3d2-4b26942f7152\") " Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.296430 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b41f0a6-0b83-4384-b3d2-4b26942f7152-kube-api-access-drmxn" (OuterVolumeSpecName: "kube-api-access-drmxn") pod "7b41f0a6-0b83-4384-b3d2-4b26942f7152" (UID: "7b41f0a6-0b83-4384-b3d2-4b26942f7152"). InnerVolumeSpecName "kube-api-access-drmxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.341101 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-config" (OuterVolumeSpecName: "config") pod "7b41f0a6-0b83-4384-b3d2-4b26942f7152" (UID: "7b41f0a6-0b83-4384-b3d2-4b26942f7152"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.350111 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b41f0a6-0b83-4384-b3d2-4b26942f7152" (UID: "7b41f0a6-0b83-4384-b3d2-4b26942f7152"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.352394 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b41f0a6-0b83-4384-b3d2-4b26942f7152" (UID: "7b41f0a6-0b83-4384-b3d2-4b26942f7152"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.362523 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b41f0a6-0b83-4384-b3d2-4b26942f7152" (UID: "7b41f0a6-0b83-4384-b3d2-4b26942f7152"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.399140 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.399461 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.399555 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.399668 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b41f0a6-0b83-4384-b3d2-4b26942f7152-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.399758 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drmxn\" (UniqueName: \"kubernetes.io/projected/7b41f0a6-0b83-4384-b3d2-4b26942f7152-kube-api-access-drmxn\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.491597 4982 generic.go:334] "Generic (PLEG): container finished" podID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerID="9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285" exitCode=0 Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.491690 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.491703 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" event={"ID":"7b41f0a6-0b83-4384-b3d2-4b26942f7152","Type":"ContainerDied","Data":"9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285"} Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.492167 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-m2lcw" event={"ID":"7b41f0a6-0b83-4384-b3d2-4b26942f7152","Type":"ContainerDied","Data":"600a5c3e93dd6541297ebd2008c2cd4a8c0570ee72dfae3d6cf695be2c0415e0"} Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.492203 4982 scope.go:117] "RemoveContainer" containerID="9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.504067 4982 generic.go:334] "Generic (PLEG): container finished" podID="81181903-0754-473b-8900-d7398664ea21" containerID="4e5ecc1d662fcad4e262e1878c9f735e49eb6a3856bf628eedbc222fa5f0bff6" exitCode=0 Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.504261 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p4vx9" event={"ID":"81181903-0754-473b-8900-d7398664ea21","Type":"ContainerDied","Data":"4e5ecc1d662fcad4e262e1878c9f735e49eb6a3856bf628eedbc222fa5f0bff6"} Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.508198 4982 generic.go:334] "Generic (PLEG): container finished" podID="3351eff8-fcf1-4b8f-a12d-420d953010f0" containerID="676ce9253eff14b373d9ee4a70619b52f039634803a44d0ea713217ce8075542" exitCode=0 Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.508350 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee09-account-create-update-vv5hs" event={"ID":"3351eff8-fcf1-4b8f-a12d-420d953010f0","Type":"ContainerDied","Data":"676ce9253eff14b373d9ee4a70619b52f039634803a44d0ea713217ce8075542"} Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.517949 4982 generic.go:334] "Generic (PLEG): container finished" podID="d53cbcd7-b691-433f-a0fa-fcd319658c1b" containerID="f883d2cd12e8bbc93ce18594c30be9e5a82eb7cddb1d1c8ec54945077ddf5a84" exitCode=0 Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.518079 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dgx4m" event={"ID":"d53cbcd7-b691-433f-a0fa-fcd319658c1b","Type":"ContainerDied","Data":"f883d2cd12e8bbc93ce18594c30be9e5a82eb7cddb1d1c8ec54945077ddf5a84"} Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.528511 4982 scope.go:117] "RemoveContainer" containerID="bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.557737 4982 scope.go:117] "RemoveContainer" containerID="9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285" Jan 23 08:15:45 crc kubenswrapper[4982]: E0123 08:15:45.558349 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285\": container with ID starting with 9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285 not found: ID does not exist" containerID="9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.558410 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285"} err="failed to get container status \"9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285\": rpc error: code = NotFound desc = could not find container \"9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285\": container with ID starting with 9e02c418604196db3533081ad7a71d4c15c6a7c7d581caedd06eb25ef09e6285 not found: ID does not exist" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.558449 4982 scope.go:117] "RemoveContainer" containerID="bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab" Jan 23 08:15:45 crc kubenswrapper[4982]: E0123 08:15:45.572873 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab\": container with ID starting with bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab not found: ID does not exist" containerID="bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.572937 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab"} err="failed to get container status \"bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab\": rpc error: code = NotFound desc = could not find container \"bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab\": container with ID starting with bedba3b99421f5b51cc77b30c97f07d90f8f2b39292f21fb53532e0894becaab not found: ID does not exist" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.594219 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-m2lcw"] Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.609307 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-m2lcw"] Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.841864 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" path="/var/lib/kubelet/pods/7b41f0a6-0b83-4384-b3d2-4b26942f7152/volumes" Jan 23 08:15:45 crc kubenswrapper[4982]: I0123 08:15:45.886155 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.010883 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-operator-scripts\") pod \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.010955 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt98p\" (UniqueName: \"kubernetes.io/projected/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-kube-api-access-kt98p\") pod \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\" (UID: \"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c\") " Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.012511 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" (UID: "422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.015601 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-kube-api-access-kt98p" (OuterVolumeSpecName: "kube-api-access-kt98p") pod "422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" (UID: "422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c"). InnerVolumeSpecName "kube-api-access-kt98p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.046431 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.114526 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4qn5\" (UniqueName: \"kubernetes.io/projected/19b13975-a675-44f5-a248-e806fe3729fe-kube-api-access-m4qn5\") pod \"19b13975-a675-44f5-a248-e806fe3729fe\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.114939 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b13975-a675-44f5-a248-e806fe3729fe-operator-scripts\") pod \"19b13975-a675-44f5-a248-e806fe3729fe\" (UID: \"19b13975-a675-44f5-a248-e806fe3729fe\") " Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.115849 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19b13975-a675-44f5-a248-e806fe3729fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19b13975-a675-44f5-a248-e806fe3729fe" (UID: "19b13975-a675-44f5-a248-e806fe3729fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.116488 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.116508 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt98p\" (UniqueName: \"kubernetes.io/projected/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c-kube-api-access-kt98p\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.116523 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19b13975-a675-44f5-a248-e806fe3729fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.117563 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19b13975-a675-44f5-a248-e806fe3729fe-kube-api-access-m4qn5" (OuterVolumeSpecName: "kube-api-access-m4qn5") pod "19b13975-a675-44f5-a248-e806fe3729fe" (UID: "19b13975-a675-44f5-a248-e806fe3729fe"). InnerVolumeSpecName "kube-api-access-m4qn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.216443 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gmmx2"] Jan 23 08:15:46 crc kubenswrapper[4982]: E0123 08:15:46.217158 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" containerName="mariadb-database-create" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217182 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" containerName="mariadb-database-create" Jan 23 08:15:46 crc kubenswrapper[4982]: E0123 08:15:46.217199 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19b13975-a675-44f5-a248-e806fe3729fe" containerName="mariadb-account-create-update" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217206 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19b13975-a675-44f5-a248-e806fe3729fe" containerName="mariadb-account-create-update" Jan 23 08:15:46 crc kubenswrapper[4982]: E0123 08:15:46.217222 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerName="init" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217232 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerName="init" Jan 23 08:15:46 crc kubenswrapper[4982]: E0123 08:15:46.217249 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerName="dnsmasq-dns" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217255 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerName="dnsmasq-dns" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217456 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b41f0a6-0b83-4384-b3d2-4b26942f7152" containerName="dnsmasq-dns" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217511 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" containerName="mariadb-database-create" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.217524 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b13975-a675-44f5-a248-e806fe3729fe" containerName="mariadb-account-create-update" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.218306 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.222285 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4qn5\" (UniqueName: \"kubernetes.io/projected/19b13975-a675-44f5-a248-e806fe3729fe-kube-api-access-m4qn5\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.228027 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gmmx2"] Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.323479 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-operator-scripts\") pod \"glance-db-create-gmmx2\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.323710 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmlt\" (UniqueName: \"kubernetes.io/projected/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-kube-api-access-kfmlt\") pod \"glance-db-create-gmmx2\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.324700 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8122-account-create-update-qdxp2"] Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.326357 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.329757 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.340528 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8122-account-create-update-qdxp2"] Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.425889 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmlt\" (UniqueName: \"kubernetes.io/projected/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-kube-api-access-kfmlt\") pod \"glance-db-create-gmmx2\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.426018 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-operator-scripts\") pod \"glance-db-create-gmmx2\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.426052 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-operator-scripts\") pod \"glance-8122-account-create-update-qdxp2\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.426102 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfl5s\" (UniqueName: \"kubernetes.io/projected/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-kube-api-access-gfl5s\") pod 
\"glance-8122-account-create-update-qdxp2\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.427038 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-operator-scripts\") pod \"glance-db-create-gmmx2\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.459519 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmlt\" (UniqueName: \"kubernetes.io/projected/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-kube-api-access-kfmlt\") pod \"glance-db-create-gmmx2\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.527429 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-operator-scripts\") pod \"glance-8122-account-create-update-qdxp2\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.527499 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfl5s\" (UniqueName: \"kubernetes.io/projected/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-kube-api-access-gfl5s\") pod \"glance-8122-account-create-update-qdxp2\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.528269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-operator-scripts\") pod \"glance-8122-account-create-update-qdxp2\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.528805 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kjfz5" event={"ID":"422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c","Type":"ContainerDied","Data":"b727d50f506d0db6253d2dd0ff16433f941cfb47a77aa89213bbf6056e279867"} Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.528839 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b727d50f506d0db6253d2dd0ff16433f941cfb47a77aa89213bbf6056e279867" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.529071 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kjfz5" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.530176 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3155-account-create-update-p7dt8" event={"ID":"19b13975-a675-44f5-a248-e806fe3729fe","Type":"ContainerDied","Data":"2f4827b03f0c3cd65cc3968c2eb4feadcc4f25df6dda08d805729d03476ae31e"} Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.530215 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3155-account-create-update-p7dt8" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.530238 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f4827b03f0c3cd65cc3968c2eb4feadcc4f25df6dda08d805729d03476ae31e" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.542980 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.545982 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfl5s\" (UniqueName: \"kubernetes.io/projected/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-kube-api-access-gfl5s\") pod \"glance-8122-account-create-update-qdxp2\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.646400 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:46 crc kubenswrapper[4982]: I0123 08:15:46.902151 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.035417 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.036808 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l945l\" (UniqueName: \"kubernetes.io/projected/d53cbcd7-b691-433f-a0fa-fcd319658c1b-kube-api-access-l945l\") pod \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.036856 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53cbcd7-b691-433f-a0fa-fcd319658c1b-operator-scripts\") pod \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\" (UID: \"d53cbcd7-b691-433f-a0fa-fcd319658c1b\") " Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.038310 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d53cbcd7-b691-433f-a0fa-fcd319658c1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d53cbcd7-b691-433f-a0fa-fcd319658c1b" (UID: "d53cbcd7-b691-433f-a0fa-fcd319658c1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.045356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d53cbcd7-b691-433f-a0fa-fcd319658c1b-kube-api-access-l945l" (OuterVolumeSpecName: "kube-api-access-l945l") pod "d53cbcd7-b691-433f-a0fa-fcd319658c1b" (UID: "d53cbcd7-b691-433f-a0fa-fcd319658c1b"). InnerVolumeSpecName "kube-api-access-l945l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.059379 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.138396 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmj9n\" (UniqueName: \"kubernetes.io/projected/81181903-0754-473b-8900-d7398664ea21-kube-api-access-kmj9n\") pod \"81181903-0754-473b-8900-d7398664ea21\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.139814 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3351eff8-fcf1-4b8f-a12d-420d953010f0-operator-scripts\") pod \"3351eff8-fcf1-4b8f-a12d-420d953010f0\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.140187 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zsv8\" (UniqueName: \"kubernetes.io/projected/3351eff8-fcf1-4b8f-a12d-420d953010f0-kube-api-access-4zsv8\") pod \"3351eff8-fcf1-4b8f-a12d-420d953010f0\" (UID: \"3351eff8-fcf1-4b8f-a12d-420d953010f0\") " Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.140399 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81181903-0754-473b-8900-d7398664ea21-operator-scripts\") pod \"81181903-0754-473b-8900-d7398664ea21\" (UID: \"81181903-0754-473b-8900-d7398664ea21\") " Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.141129 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3351eff8-fcf1-4b8f-a12d-420d953010f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3351eff8-fcf1-4b8f-a12d-420d953010f0" (UID: "3351eff8-fcf1-4b8f-a12d-420d953010f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.141581 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81181903-0754-473b-8900-d7398664ea21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81181903-0754-473b-8900-d7398664ea21" (UID: "81181903-0754-473b-8900-d7398664ea21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.142283 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3351eff8-fcf1-4b8f-a12d-420d953010f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.142320 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l945l\" (UniqueName: \"kubernetes.io/projected/d53cbcd7-b691-433f-a0fa-fcd319658c1b-kube-api-access-l945l\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.142337 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d53cbcd7-b691-433f-a0fa-fcd319658c1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.142351 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81181903-0754-473b-8900-d7398664ea21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.142859 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81181903-0754-473b-8900-d7398664ea21-kube-api-access-kmj9n" (OuterVolumeSpecName: "kube-api-access-kmj9n") pod "81181903-0754-473b-8900-d7398664ea21" (UID: "81181903-0754-473b-8900-d7398664ea21"). InnerVolumeSpecName "kube-api-access-kmj9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.150949 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3351eff8-fcf1-4b8f-a12d-420d953010f0-kube-api-access-4zsv8" (OuterVolumeSpecName: "kube-api-access-4zsv8") pod "3351eff8-fcf1-4b8f-a12d-420d953010f0" (UID: "3351eff8-fcf1-4b8f-a12d-420d953010f0"). InnerVolumeSpecName "kube-api-access-4zsv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.175960 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gmmx2"] Jan 23 08:15:47 crc kubenswrapper[4982]: W0123 08:15:47.179865 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bbfefec_4285_4b91_a9d8_5dc47c0efb34.slice/crio-78679916d1ceadb094ace9606e9b8a1073a9e3ddac6fc4fb59e759560f2b3ff3 WatchSource:0}: Error finding container 78679916d1ceadb094ace9606e9b8a1073a9e3ddac6fc4fb59e759560f2b3ff3: Status 404 returned error can't find the container with id 78679916d1ceadb094ace9606e9b8a1073a9e3ddac6fc4fb59e759560f2b3ff3 Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.243742 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zsv8\" (UniqueName: \"kubernetes.io/projected/3351eff8-fcf1-4b8f-a12d-420d953010f0-kube-api-access-4zsv8\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.243770 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmj9n\" (UniqueName: \"kubernetes.io/projected/81181903-0754-473b-8900-d7398664ea21-kube-api-access-kmj9n\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.308651 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8122-account-create-update-qdxp2"] Jan 23 08:15:47 crc kubenswrapper[4982]: W0123 08:15:47.312588 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3dcf6f_a3f4_4cd8_848b_398bb9338b24.slice/crio-510bf4eb065ada213b27208c5830671dc9df028fb9415d2e88d5801c26bfa650 WatchSource:0}: Error finding container 510bf4eb065ada213b27208c5830671dc9df028fb9415d2e88d5801c26bfa650: Status 404 returned error can't find the container with id 510bf4eb065ada213b27208c5830671dc9df028fb9415d2e88d5801c26bfa650 Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.547027 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-p4vx9" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.547027 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-p4vx9" event={"ID":"81181903-0754-473b-8900-d7398664ea21","Type":"ContainerDied","Data":"900da53b923bbc00191751af2f528515fc2e0b0b24bedf08aa2e407a6b2ae98b"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.548280 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900da53b923bbc00191751af2f528515fc2e0b0b24bedf08aa2e407a6b2ae98b" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.552455 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ee09-account-create-update-vv5hs" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.552495 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ee09-account-create-update-vv5hs" event={"ID":"3351eff8-fcf1-4b8f-a12d-420d953010f0","Type":"ContainerDied","Data":"7270ff4012c899bda946a6cd66d50d1a491d101b29e1e3bac17a4bcfce8a5ceb"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.552543 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7270ff4012c899bda946a6cd66d50d1a491d101b29e1e3bac17a4bcfce8a5ceb" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.559961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8122-account-create-update-qdxp2" event={"ID":"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24","Type":"ContainerStarted","Data":"8e9f68b22806c0d91023027c2f74867c11015aa6e629f98eba701236324fb716"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.560011 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8122-account-create-update-qdxp2" event={"ID":"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24","Type":"ContainerStarted","Data":"510bf4eb065ada213b27208c5830671dc9df028fb9415d2e88d5801c26bfa650"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.565584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmmx2" event={"ID":"5bbfefec-4285-4b91-a9d8-5dc47c0efb34","Type":"ContainerStarted","Data":"2005a7393e26c0e16e277225d7aa3a21d05deced5aba1019940b9c9376b287e6"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.565688 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmmx2" event={"ID":"5bbfefec-4285-4b91-a9d8-5dc47c0efb34","Type":"ContainerStarted","Data":"78679916d1ceadb094ace9606e9b8a1073a9e3ddac6fc4fb59e759560f2b3ff3"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.568318 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dgx4m" event={"ID":"d53cbcd7-b691-433f-a0fa-fcd319658c1b","Type":"ContainerDied","Data":"d6473fffc363fee89e9dcd4584faf3ccf037a8dadea42a0db49dddbe410c2871"} Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.568432 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6473fffc363fee89e9dcd4584faf3ccf037a8dadea42a0db49dddbe410c2871" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.568563 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dgx4m" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.582773 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8122-account-create-update-qdxp2" podStartSLOduration=1.582747147 podStartE2EDuration="1.582747147s" podCreationTimestamp="2026-01-23 08:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:47.574465818 +0000 UTC m=+1222.054829204" watchObservedRunningTime="2026-01-23 08:15:47.582747147 +0000 UTC m=+1222.063110533" Jan 23 08:15:47 crc kubenswrapper[4982]: I0123 08:15:47.604361 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-gmmx2" podStartSLOduration=1.6043236090000002 podStartE2EDuration="1.604323609s" podCreationTimestamp="2026-01-23 08:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:47.59719074 +0000 UTC m=+1222.077554126" watchObservedRunningTime="2026-01-23 08:15:47.604323609 +0000 UTC m=+1222.084686985" Jan 23 08:15:47 crc kubenswrapper[4982]: E0123 08:15:47.734004 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81181903_0754_473b_8900_d7398664ea21.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3351eff8_fcf1_4b8f_a12d_420d953010f0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd53cbcd7_b691_433f_a0fa_fcd319658c1b.slice\": RecentStats: unable to find data in memory cache]" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.060907 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-p4vx9"] Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.071084 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-p4vx9"] Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.155516 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4mtm7"] Jan 23 08:15:48 crc kubenswrapper[4982]: E0123 08:15:48.156127 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3351eff8-fcf1-4b8f-a12d-420d953010f0" containerName="mariadb-account-create-update" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.156157 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3351eff8-fcf1-4b8f-a12d-420d953010f0" containerName="mariadb-account-create-update" Jan 23 08:15:48 crc kubenswrapper[4982]: E0123 08:15:48.156192 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81181903-0754-473b-8900-d7398664ea21" containerName="mariadb-account-create-update" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.156201 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="81181903-0754-473b-8900-d7398664ea21" containerName="mariadb-account-create-update" Jan 23 08:15:48 crc kubenswrapper[4982]: E0123 08:15:48.156232 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d53cbcd7-b691-433f-a0fa-fcd319658c1b" containerName="mariadb-database-create" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.156242 4982 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d53cbcd7-b691-433f-a0fa-fcd319658c1b" containerName="mariadb-database-create" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.156492 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="81181903-0754-473b-8900-d7398664ea21" containerName="mariadb-account-create-update" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.156551 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3351eff8-fcf1-4b8f-a12d-420d953010f0" containerName="mariadb-account-create-update" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.156566 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d53cbcd7-b691-433f-a0fa-fcd319658c1b" containerName="mariadb-database-create" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.157383 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.164710 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4mtm7"] Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.165050 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.262064 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435f47b7-cbb6-4f42-8a35-ea23c14e81df-operator-scripts\") pod \"root-account-create-update-4mtm7\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.262279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n6lf\" (UniqueName: \"kubernetes.io/projected/435f47b7-cbb6-4f42-8a35-ea23c14e81df-kube-api-access-5n6lf\") pod \"root-account-create-update-4mtm7\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.363954 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435f47b7-cbb6-4f42-8a35-ea23c14e81df-operator-scripts\") pod \"root-account-create-update-4mtm7\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.364337 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n6lf\" (UniqueName: \"kubernetes.io/projected/435f47b7-cbb6-4f42-8a35-ea23c14e81df-kube-api-access-5n6lf\") pod \"root-account-create-update-4mtm7\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.366975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435f47b7-cbb6-4f42-8a35-ea23c14e81df-operator-scripts\") pod \"root-account-create-update-4mtm7\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.385511 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n6lf\" (UniqueName: 
\"kubernetes.io/projected/435f47b7-cbb6-4f42-8a35-ea23c14e81df-kube-api-access-5n6lf\") pod \"root-account-create-update-4mtm7\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.474038 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.580023 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" containerID="8e9f68b22806c0d91023027c2f74867c11015aa6e629f98eba701236324fb716" exitCode=0 Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.580097 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8122-account-create-update-qdxp2" event={"ID":"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24","Type":"ContainerDied","Data":"8e9f68b22806c0d91023027c2f74867c11015aa6e629f98eba701236324fb716"} Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.594851 4982 generic.go:334] "Generic (PLEG): container finished" podID="5bbfefec-4285-4b91-a9d8-5dc47c0efb34" containerID="2005a7393e26c0e16e277225d7aa3a21d05deced5aba1019940b9c9376b287e6" exitCode=0 Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.594908 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmmx2" event={"ID":"5bbfefec-4285-4b91-a9d8-5dc47c0efb34","Type":"ContainerDied","Data":"2005a7393e26c0e16e277225d7aa3a21d05deced5aba1019940b9c9376b287e6"} Jan 23 08:15:48 crc kubenswrapper[4982]: I0123 08:15:48.923117 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4mtm7"] Jan 23 08:15:48 crc kubenswrapper[4982]: W0123 08:15:48.927424 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod435f47b7_cbb6_4f42_8a35_ea23c14e81df.slice/crio-3525d43817649f4be5a5bdd8c0d77a98a8ccc3abda3218711e788dfd53f2887a WatchSource:0}: Error finding container 3525d43817649f4be5a5bdd8c0d77a98a8ccc3abda3218711e788dfd53f2887a: Status 404 returned error can't find the container with id 3525d43817649f4be5a5bdd8c0d77a98a8ccc3abda3218711e788dfd53f2887a Jan 23 08:15:49 crc kubenswrapper[4982]: I0123 08:15:49.180534 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 08:15:49 crc kubenswrapper[4982]: I0123 08:15:49.604199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4mtm7" event={"ID":"435f47b7-cbb6-4f42-8a35-ea23c14e81df","Type":"ContainerStarted","Data":"ebb645735179d0c992c61b57c2e7185ac4541a516e9c3b49608bc861dc751bc4"} Jan 23 08:15:49 crc kubenswrapper[4982]: I0123 08:15:49.604246 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4mtm7" event={"ID":"435f47b7-cbb6-4f42-8a35-ea23c14e81df","Type":"ContainerStarted","Data":"3525d43817649f4be5a5bdd8c0d77a98a8ccc3abda3218711e788dfd53f2887a"} Jan 23 08:15:49 crc kubenswrapper[4982]: I0123 08:15:49.622098 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4mtm7" podStartSLOduration=1.622075319 podStartE2EDuration="1.622075319s" podCreationTimestamp="2026-01-23 08:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:49.617395035 +0000 UTC 
m=+1224.097758421" watchObservedRunningTime="2026-01-23 08:15:49.622075319 +0000 UTC m=+1224.102438705" Jan 23 08:15:49 crc kubenswrapper[4982]: I0123 08:15:49.844539 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81181903-0754-473b-8900-d7398664ea21" path="/var/lib/kubelet/pods/81181903-0754-473b-8900-d7398664ea21/volumes" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.016442 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.023563 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.102837 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-operator-scripts\") pod \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.102886 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-operator-scripts\") pod \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.103009 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfl5s\" (UniqueName: \"kubernetes.io/projected/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-kube-api-access-gfl5s\") pod \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\" (UID: \"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24\") " Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.103168 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmlt\" (UniqueName: \"kubernetes.io/projected/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-kube-api-access-kfmlt\") pod \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\" (UID: \"5bbfefec-4285-4b91-a9d8-5dc47c0efb34\") " Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.103837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" (UID: "3d3dcf6f-a3f4-4cd8-848b-398bb9338b24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.103872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5bbfefec-4285-4b91-a9d8-5dc47c0efb34" (UID: "5bbfefec-4285-4b91-a9d8-5dc47c0efb34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.108617 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-kube-api-access-kfmlt" (OuterVolumeSpecName: "kube-api-access-kfmlt") pod "5bbfefec-4285-4b91-a9d8-5dc47c0efb34" (UID: "5bbfefec-4285-4b91-a9d8-5dc47c0efb34"). InnerVolumeSpecName "kube-api-access-kfmlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.109816 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-kube-api-access-gfl5s" (OuterVolumeSpecName: "kube-api-access-gfl5s") pod "3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" (UID: "3d3dcf6f-a3f4-4cd8-848b-398bb9338b24"). InnerVolumeSpecName "kube-api-access-gfl5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.206034 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfmlt\" (UniqueName: \"kubernetes.io/projected/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-kube-api-access-kfmlt\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.206077 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.206087 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5bbfefec-4285-4b91-a9d8-5dc47c0efb34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.206095 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfl5s\" (UniqueName: \"kubernetes.io/projected/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24-kube-api-access-gfl5s\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.613666 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8122-account-create-update-qdxp2" event={"ID":"3d3dcf6f-a3f4-4cd8-848b-398bb9338b24","Type":"ContainerDied","Data":"510bf4eb065ada213b27208c5830671dc9df028fb9415d2e88d5801c26bfa650"} Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.613715 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="510bf4eb065ada213b27208c5830671dc9df028fb9415d2e88d5801c26bfa650" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.613751 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8122-account-create-update-qdxp2" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.618237 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gmmx2" event={"ID":"5bbfefec-4285-4b91-a9d8-5dc47c0efb34","Type":"ContainerDied","Data":"78679916d1ceadb094ace9606e9b8a1073a9e3ddac6fc4fb59e759560f2b3ff3"} Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.618313 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78679916d1ceadb094ace9606e9b8a1073a9e3ddac6fc4fb59e759560f2b3ff3" Jan 23 08:15:50 crc kubenswrapper[4982]: I0123 08:15:50.618525 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gmmx2" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.022299 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:15:51 crc kubenswrapper[4982]: E0123 08:15:51.022488 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:15:51 crc kubenswrapper[4982]: E0123 08:15:51.022770 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 08:15:51 crc kubenswrapper[4982]: E0123 08:15:51.022829 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift podName:5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1 nodeName:}" failed. No retries permitted until 2026-01-23 08:16:07.022810388 +0000 UTC m=+1241.503173774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift") pod "swift-storage-0" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1") : configmap "swift-ring-files" not found Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.555096 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-slkch"] Jan 23 08:15:51 crc kubenswrapper[4982]: E0123 08:15:51.555500 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbfefec-4285-4b91-a9d8-5dc47c0efb34" containerName="mariadb-database-create" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.555525 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbfefec-4285-4b91-a9d8-5dc47c0efb34" containerName="mariadb-database-create" Jan 23 08:15:51 crc kubenswrapper[4982]: E0123 08:15:51.555564 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" containerName="mariadb-account-create-update" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.555572 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" containerName="mariadb-account-create-update" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.555784 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bbfefec-4285-4b91-a9d8-5dc47c0efb34" containerName="mariadb-database-create" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.555829 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" containerName="mariadb-account-create-update" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.556564 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.559095 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.559176 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cdkmd" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.565278 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-slkch"] Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.633669 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-combined-ca-bundle\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.634294 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4t6n\" (UniqueName: \"kubernetes.io/projected/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-kube-api-access-r4t6n\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.634434 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-db-sync-config-data\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.634525 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-config-data\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.736529 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4t6n\" (UniqueName: \"kubernetes.io/projected/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-kube-api-access-r4t6n\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.736601 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-db-sync-config-data\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.736620 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-config-data\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.736709 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-combined-ca-bundle\") pod 
\"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.741386 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-db-sync-config-data\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.741857 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-config-data\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.743226 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-combined-ca-bundle\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.762994 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4t6n\" (UniqueName: \"kubernetes.io/projected/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-kube-api-access-r4t6n\") pod \"glance-db-sync-slkch\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") " pod="openstack/glance-db-sync-slkch" Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.871944 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qdkmz" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" probeResult="failure" output=< Jan 23 08:15:51 crc kubenswrapper[4982]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 08:15:51 crc kubenswrapper[4982]: > Jan 23 08:15:51 crc kubenswrapper[4982]: I0123 08:15:51.875004 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-slkch" Jan 23 08:15:52 crc kubenswrapper[4982]: I0123 08:15:52.430314 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-slkch"] Jan 23 08:15:52 crc kubenswrapper[4982]: W0123 08:15:52.433338 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c0742f3_a1af_45c5_aa3f_a7cf66d4796b.slice/crio-6872207c8b4d83ca5458b3c2fab6f8a05a1f91740dd74e7abf8ff5316a4138b7 WatchSource:0}: Error finding container 6872207c8b4d83ca5458b3c2fab6f8a05a1f91740dd74e7abf8ff5316a4138b7: Status 404 returned error can't find the container with id 6872207c8b4d83ca5458b3c2fab6f8a05a1f91740dd74e7abf8ff5316a4138b7 Jan 23 08:15:52 crc kubenswrapper[4982]: I0123 08:15:52.637447 4982 generic.go:334] "Generic (PLEG): container finished" podID="435f47b7-cbb6-4f42-8a35-ea23c14e81df" containerID="ebb645735179d0c992c61b57c2e7185ac4541a516e9c3b49608bc861dc751bc4" exitCode=0 Jan 23 08:15:52 crc kubenswrapper[4982]: I0123 08:15:52.637520 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4mtm7" event={"ID":"435f47b7-cbb6-4f42-8a35-ea23c14e81df","Type":"ContainerDied","Data":"ebb645735179d0c992c61b57c2e7185ac4541a516e9c3b49608bc861dc751bc4"} Jan 23 08:15:52 crc kubenswrapper[4982]: I0123 08:15:52.639035 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slkch" event={"ID":"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b","Type":"ContainerStarted","Data":"6872207c8b4d83ca5458b3c2fab6f8a05a1f91740dd74e7abf8ff5316a4138b7"} Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.088949 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.189219 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435f47b7-cbb6-4f42-8a35-ea23c14e81df-operator-scripts\") pod \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.189329 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n6lf\" (UniqueName: \"kubernetes.io/projected/435f47b7-cbb6-4f42-8a35-ea23c14e81df-kube-api-access-5n6lf\") pod \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\" (UID: \"435f47b7-cbb6-4f42-8a35-ea23c14e81df\") " Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.190534 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435f47b7-cbb6-4f42-8a35-ea23c14e81df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "435f47b7-cbb6-4f42-8a35-ea23c14e81df" (UID: "435f47b7-cbb6-4f42-8a35-ea23c14e81df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.199907 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435f47b7-cbb6-4f42-8a35-ea23c14e81df-kube-api-access-5n6lf" (OuterVolumeSpecName: "kube-api-access-5n6lf") pod "435f47b7-cbb6-4f42-8a35-ea23c14e81df" (UID: "435f47b7-cbb6-4f42-8a35-ea23c14e81df"). InnerVolumeSpecName "kube-api-access-5n6lf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.292484 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435f47b7-cbb6-4f42-8a35-ea23c14e81df-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.292538 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n6lf\" (UniqueName: \"kubernetes.io/projected/435f47b7-cbb6-4f42-8a35-ea23c14e81df-kube-api-access-5n6lf\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.659159 4982 generic.go:334] "Generic (PLEG): container finished" podID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerID="0d2e98831fddc20f7bb33d42b529158a780bb8345579dad163a63f455e992e0d" exitCode=0 Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.659292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b4b8957-4a36-4d8b-a591-fdcbe696c808","Type":"ContainerDied","Data":"0d2e98831fddc20f7bb33d42b529158a780bb8345579dad163a63f455e992e0d"} Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.662918 4982 generic.go:334] "Generic (PLEG): container finished" podID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerID="ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef" exitCode=0 Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.663034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26d23dde-8aac-481b-bfdb-ce3315166bc4","Type":"ContainerDied","Data":"ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef"} Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.669551 4982 generic.go:334] "Generic (PLEG): container finished" podID="4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" containerID="64e0007da8498c9c0f6c2a2120c194c4dd9b01a8bed0b734b2071d6d4123d58b" exitCode=0 Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.669710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mqmqp" event={"ID":"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d","Type":"ContainerDied","Data":"64e0007da8498c9c0f6c2a2120c194c4dd9b01a8bed0b734b2071d6d4123d58b"} Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.676762 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4mtm7" event={"ID":"435f47b7-cbb6-4f42-8a35-ea23c14e81df","Type":"ContainerDied","Data":"3525d43817649f4be5a5bdd8c0d77a98a8ccc3abda3218711e788dfd53f2887a"} Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.676827 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4mtm7" Jan 23 08:15:54 crc kubenswrapper[4982]: I0123 08:15:54.676838 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3525d43817649f4be5a5bdd8c0d77a98a8ccc3abda3218711e788dfd53f2887a" Jan 23 08:15:55 crc kubenswrapper[4982]: I0123 08:15:55.690052 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b4b8957-4a36-4d8b-a591-fdcbe696c808","Type":"ContainerStarted","Data":"2a7e5d12bd1b08f1897cad70a1c21dbddaaabae2025c8884bde0a2e110eb49ed"} Jan 23 08:15:55 crc kubenswrapper[4982]: I0123 08:15:55.690856 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 08:15:55 crc kubenswrapper[4982]: I0123 08:15:55.696529 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26d23dde-8aac-481b-bfdb-ce3315166bc4","Type":"ContainerStarted","Data":"45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f"} Jan 23 08:15:55 crc kubenswrapper[4982]: I0123 08:15:55.696839 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 08:15:55 crc kubenswrapper[4982]: I0123 08:15:55.754916 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.832753996 podStartE2EDuration="1m19.754887075s" podCreationTimestamp="2026-01-23 08:14:36 +0000 UTC" firstStartedPulling="2026-01-23 08:14:38.539187922 +0000 UTC m=+1153.019551298" lastFinishedPulling="2026-01-23 08:15:19.461320991 +0000 UTC m=+1193.941684377" observedRunningTime="2026-01-23 08:15:55.725447824 +0000 UTC m=+1230.205811230" watchObservedRunningTime="2026-01-23 08:15:55.754887075 +0000 UTC m=+1230.235250461" Jan 23 08:15:55 crc kubenswrapper[4982]: I0123 08:15:55.755940 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.007467096 podStartE2EDuration="1m19.755934723s" podCreationTimestamp="2026-01-23 08:14:36 +0000 UTC" firstStartedPulling="2026-01-23 08:14:39.013723532 +0000 UTC m=+1153.494086918" lastFinishedPulling="2026-01-23 08:15:19.762191149 +0000 UTC m=+1194.242554545" observedRunningTime="2026-01-23 08:15:55.750382496 +0000 UTC m=+1230.230745902" watchObservedRunningTime="2026-01-23 08:15:55.755934723 +0000 UTC m=+1230.236298109" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.063972 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.138546 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-ring-data-devices\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.138698 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cldf9\" (UniqueName: \"kubernetes.io/projected/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-kube-api-access-cldf9\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.138739 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-dispersionconf\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.138884 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-combined-ca-bundle\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.138965 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-swiftconf\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.139090 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-scripts\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.139154 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-etc-swift\") pod \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\" (UID: \"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d\") " Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.139745 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.140373 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.145202 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-kube-api-access-cldf9" (OuterVolumeSpecName: "kube-api-access-cldf9") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "kube-api-access-cldf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.151812 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.172009 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.178718 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-scripts" (OuterVolumeSpecName: "scripts") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.187856 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" (UID: "4da6f6c3-ea0b-47d7-805c-c55b43c96c1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241466 4982 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241491 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241502 4982 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241511 4982 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241525 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cldf9\" (UniqueName: \"kubernetes.io/projected/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-kube-api-access-cldf9\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241533 4982 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.241541 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.708814 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-mqmqp" event={"ID":"4da6f6c3-ea0b-47d7-805c-c55b43c96c1d","Type":"ContainerDied","Data":"869a17742878d32c48f3118c54cbecd48a87a64b2382c9eaf2da601085fa6348"} Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.709179 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="869a17742878d32c48f3118c54cbecd48a87a64b2382c9eaf2da601085fa6348" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.708928 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-mqmqp" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.904221 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qdkmz" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" probeResult="failure" output=< Jan 23 08:15:56 crc kubenswrapper[4982]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 08:15:56 crc kubenswrapper[4982]: > Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.950844 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:15:56 crc kubenswrapper[4982]: I0123 08:15:56.984337 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.317034 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qdkmz-config-rb97x"] Jan 23 08:15:57 crc kubenswrapper[4982]: E0123 08:15:57.317436 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435f47b7-cbb6-4f42-8a35-ea23c14e81df" containerName="mariadb-account-create-update" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.317457 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="435f47b7-cbb6-4f42-8a35-ea23c14e81df" containerName="mariadb-account-create-update" Jan 23 08:15:57 crc kubenswrapper[4982]: E0123 08:15:57.317484 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" containerName="swift-ring-rebalance" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.317491 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" containerName="swift-ring-rebalance" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.317688 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" containerName="swift-ring-rebalance" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.317724 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="435f47b7-cbb6-4f42-8a35-ea23c14e81df" containerName="mariadb-account-create-update" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.318333 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.323807 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.337367 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qdkmz-config-rb97x"] Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.373730 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run-ovn\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.373951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.373992 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-log-ovn\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.374022 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjfzv\" (UniqueName: \"kubernetes.io/projected/7b83822e-4ab4-4df4-b147-f1a854326048-kube-api-access-rjfzv\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.374047 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-additional-scripts\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.374091 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-scripts\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.475799 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run-ovn\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.475969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run\") pod 
\"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-log-ovn\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476048 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjfzv\" (UniqueName: \"kubernetes.io/projected/7b83822e-4ab4-4df4-b147-f1a854326048-kube-api-access-rjfzv\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476083 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-additional-scripts\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476138 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-scripts\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476210 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run-ovn\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476261 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-log-ovn\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.476322 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.477811 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-additional-scripts\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.478433 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-scripts\") pod 
\"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.505921 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjfzv\" (UniqueName: \"kubernetes.io/projected/7b83822e-4ab4-4df4-b147-f1a854326048-kube-api-access-rjfzv\") pod \"ovn-controller-qdkmz-config-rb97x\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:57 crc kubenswrapper[4982]: I0123 08:15:57.639531 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:15:58 crc kubenswrapper[4982]: I0123 08:15:58.157273 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qdkmz-config-rb97x"] Jan 23 08:15:58 crc kubenswrapper[4982]: I0123 08:15:58.734248 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-rb97x" event={"ID":"7b83822e-4ab4-4df4-b147-f1a854326048","Type":"ContainerStarted","Data":"29eb3a543d5ebdca36d0fd83d091f9af6c5e24ff9135abc626ab6043bf685fa1"} Jan 23 08:15:58 crc kubenswrapper[4982]: I0123 08:15:58.734586 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-rb97x" event={"ID":"7b83822e-4ab4-4df4-b147-f1a854326048","Type":"ContainerStarted","Data":"246da9c8b8ad5dfde00779c9bbc50cedb0ecd4c808bfbea0e156cdfbee4df742"} Jan 23 08:15:58 crc kubenswrapper[4982]: I0123 08:15:58.758698 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qdkmz-config-rb97x" podStartSLOduration=1.758673678 podStartE2EDuration="1.758673678s" podCreationTimestamp="2026-01-23 08:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:15:58.757016354 +0000 UTC m=+1233.237379730" watchObservedRunningTime="2026-01-23 08:15:58.758673678 +0000 UTC m=+1233.239037084" Jan 23 08:15:59 crc kubenswrapper[4982]: I0123 08:15:59.468212 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4mtm7"] Jan 23 08:15:59 crc kubenswrapper[4982]: I0123 08:15:59.499117 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4mtm7"] Jan 23 08:15:59 crc kubenswrapper[4982]: I0123 08:15:59.745284 4982 generic.go:334] "Generic (PLEG): container finished" podID="7b83822e-4ab4-4df4-b147-f1a854326048" containerID="29eb3a543d5ebdca36d0fd83d091f9af6c5e24ff9135abc626ab6043bf685fa1" exitCode=0 Jan 23 08:15:59 crc kubenswrapper[4982]: I0123 08:15:59.745341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-rb97x" event={"ID":"7b83822e-4ab4-4df4-b147-f1a854326048","Type":"ContainerDied","Data":"29eb3a543d5ebdca36d0fd83d091f9af6c5e24ff9135abc626ab6043bf685fa1"} Jan 23 08:15:59 crc kubenswrapper[4982]: I0123 08:15:59.838200 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435f47b7-cbb6-4f42-8a35-ea23c14e81df" path="/var/lib/kubelet/pods/435f47b7-cbb6-4f42-8a35-ea23c14e81df/volumes" Jan 23 08:16:01 crc kubenswrapper[4982]: I0123 08:16:01.882650 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qdkmz" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.513747 4982 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/root-account-create-update-l8fs8"] Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.516024 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.519046 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.527069 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l8fs8"] Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.566190 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95438dd4-92f1-4e6f-bff5-2b3471920e2f-operator-scripts\") pod \"root-account-create-update-l8fs8\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") " pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.566279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwb5\" (UniqueName: \"kubernetes.io/projected/95438dd4-92f1-4e6f-bff5-2b3471920e2f-kube-api-access-9lwb5\") pod \"root-account-create-update-l8fs8\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") " pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.667739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95438dd4-92f1-4e6f-bff5-2b3471920e2f-operator-scripts\") pod \"root-account-create-update-l8fs8\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") " pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.667816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwb5\" (UniqueName: \"kubernetes.io/projected/95438dd4-92f1-4e6f-bff5-2b3471920e2f-kube-api-access-9lwb5\") pod \"root-account-create-update-l8fs8\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") " pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.669647 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95438dd4-92f1-4e6f-bff5-2b3471920e2f-operator-scripts\") pod \"root-account-create-update-l8fs8\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") " pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.690271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwb5\" (UniqueName: \"kubernetes.io/projected/95438dd4-92f1-4e6f-bff5-2b3471920e2f-kube-api-access-9lwb5\") pod \"root-account-create-update-l8fs8\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") " pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:04 crc kubenswrapper[4982]: I0123 08:16:04.837975 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l8fs8" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.113514 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.122020 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"swift-storage-0\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " pod="openstack/swift-storage-0" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.198553 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.759335 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.846477 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-rb97x" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.879862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-rb97x" event={"ID":"7b83822e-4ab4-4df4-b147-f1a854326048","Type":"ContainerDied","Data":"246da9c8b8ad5dfde00779c9bbc50cedb0ecd4c808bfbea0e156cdfbee4df742"} Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.879919 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246da9c8b8ad5dfde00779c9bbc50cedb0ecd4c808bfbea0e156cdfbee4df742" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.923746 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.927889 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run-ovn\") pod \"7b83822e-4ab4-4df4-b147-f1a854326048\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928022 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run\") pod \"7b83822e-4ab4-4df4-b147-f1a854326048\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928013 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7b83822e-4ab4-4df4-b147-f1a854326048" (UID: "7b83822e-4ab4-4df4-b147-f1a854326048"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928064 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjfzv\" (UniqueName: \"kubernetes.io/projected/7b83822e-4ab4-4df4-b147-f1a854326048-kube-api-access-rjfzv\") pod \"7b83822e-4ab4-4df4-b147-f1a854326048\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928128 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run" (OuterVolumeSpecName: "var-run") pod "7b83822e-4ab4-4df4-b147-f1a854326048" (UID: "7b83822e-4ab4-4df4-b147-f1a854326048"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928169 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-scripts\") pod \"7b83822e-4ab4-4df4-b147-f1a854326048\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928262 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-additional-scripts\") pod \"7b83822e-4ab4-4df4-b147-f1a854326048\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928309 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-log-ovn\") pod \"7b83822e-4ab4-4df4-b147-f1a854326048\" (UID: \"7b83822e-4ab4-4df4-b147-f1a854326048\") " Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928937 4982 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.928963 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.930060 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7b83822e-4ab4-4df4-b147-f1a854326048" (UID: "7b83822e-4ab4-4df4-b147-f1a854326048"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.930508 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7b83822e-4ab4-4df4-b147-f1a854326048" (UID: "7b83822e-4ab4-4df4-b147-f1a854326048"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.930894 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-scripts" (OuterVolumeSpecName: "scripts") pod "7b83822e-4ab4-4df4-b147-f1a854326048" (UID: "7b83822e-4ab4-4df4-b147-f1a854326048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:07 crc kubenswrapper[4982]: I0123 08:16:07.938996 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b83822e-4ab4-4df4-b147-f1a854326048-kube-api-access-rjfzv" (OuterVolumeSpecName: "kube-api-access-rjfzv") pod "7b83822e-4ab4-4df4-b147-f1a854326048" (UID: "7b83822e-4ab4-4df4-b147-f1a854326048"). InnerVolumeSpecName "kube-api-access-rjfzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.032570 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjfzv\" (UniqueName: \"kubernetes.io/projected/7b83822e-4ab4-4df4-b147-f1a854326048-kube-api-access-rjfzv\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.032639 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.032656 4982 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7b83822e-4ab4-4df4-b147-f1a854326048-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.032668 4982 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7b83822e-4ab4-4df4-b147-f1a854326048-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.193467 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l8fs8"] Jan 23 08:16:08 crc kubenswrapper[4982]: W0123 08:16:08.200029 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95438dd4_92f1_4e6f_bff5_2b3471920e2f.slice/crio-c28863b0e0533df4dc1f9c5f714de39ca697d6deb56da08a06956d5154699caa WatchSource:0}: Error finding container c28863b0e0533df4dc1f9c5f714de39ca697d6deb56da08a06956d5154699caa: Status 404 returned error can't find the container with id c28863b0e0533df4dc1f9c5f714de39ca697d6deb56da08a06956d5154699caa Jan 23 08:16:08 crc kubenswrapper[4982]: E0123 08:16:08.249020 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b83822e_4ab4_4df4_b147_f1a854326048.slice\": RecentStats: unable to find data in memory cache]" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.322803 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.399892 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 08:16:08 crc 
kubenswrapper[4982]: W0123 08:16:08.420248 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f1d3fcb_23d2_4312_8ba0_3c25cf3fb1d1.slice/crio-de9202c698b3916fb1d2512866e4d9733518738e6f27d5f5fa8a3887d9d5db6f WatchSource:0}: Error finding container de9202c698b3916fb1d2512866e4d9733518738e6f27d5f5fa8a3887d9d5db6f: Status 404 returned error can't find the container with id de9202c698b3916fb1d2512866e4d9733518738e6f27d5f5fa8a3887d9d5db6f Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.859975 4982 generic.go:334] "Generic (PLEG): container finished" podID="95438dd4-92f1-4e6f-bff5-2b3471920e2f" containerID="92cc8670b02207b8d76731a04ece2d09171e94fc63450601eeb112b6ff6b02d2" exitCode=0 Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.860286 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l8fs8" event={"ID":"95438dd4-92f1-4e6f-bff5-2b3471920e2f","Type":"ContainerDied","Data":"92cc8670b02207b8d76731a04ece2d09171e94fc63450601eeb112b6ff6b02d2"} Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.860429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l8fs8" event={"ID":"95438dd4-92f1-4e6f-bff5-2b3471920e2f","Type":"ContainerStarted","Data":"c28863b0e0533df4dc1f9c5f714de39ca697d6deb56da08a06956d5154699caa"} Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.862768 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slkch" event={"ID":"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b","Type":"ContainerStarted","Data":"b457d4fe1ca2f0858e1e5fa829f17855220a24e0cd7893a3da815b99c94cffb6"} Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.864464 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"de9202c698b3916fb1d2512866e4d9733518738e6f27d5f5fa8a3887d9d5db6f"} Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.905737 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-slkch" podStartSLOduration=2.562067574 podStartE2EDuration="17.905708997s" podCreationTimestamp="2026-01-23 08:15:51 +0000 UTC" firstStartedPulling="2026-01-23 08:15:52.436152942 +0000 UTC m=+1226.916516328" lastFinishedPulling="2026-01-23 08:16:07.779794365 +0000 UTC m=+1242.260157751" observedRunningTime="2026-01-23 08:16:08.89449843 +0000 UTC m=+1243.374861836" watchObservedRunningTime="2026-01-23 08:16:08.905708997 +0000 UTC m=+1243.386072383" Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.915998 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qdkmz-config-rb97x"] Jan 23 08:16:08 crc kubenswrapper[4982]: I0123 08:16:08.924703 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qdkmz-config-rb97x"] Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.053077 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qdkmz-config-7rvz2"] Jan 23 08:16:09 crc kubenswrapper[4982]: E0123 08:16:09.053801 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b83822e-4ab4-4df4-b147-f1a854326048" containerName="ovn-config" Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.053875 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b83822e-4ab4-4df4-b147-f1a854326048" containerName="ovn-config" Jan 23 08:16:09 crc 
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.054134 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b83822e-4ab4-4df4-b147-f1a854326048" containerName="ovn-config"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.054774 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.058700 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.078210 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qdkmz-config-7rvz2"]
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.152920 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.152985 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8bk\" (UniqueName: \"kubernetes.io/projected/06956f84-5a67-450c-a3b9-b310d8938510-kube-api-access-nk8bk\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.153025 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run-ovn\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.153071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-scripts\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.153098 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-log-ovn\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.153486 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-additional-scripts\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256043 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-additional-scripts\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk8bk\" (UniqueName: \"kubernetes.io/projected/06956f84-5a67-450c-a3b9-b310d8938510-kube-api-access-nk8bk\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256247 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run-ovn\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256281 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-scripts\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-log-ovn\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256754 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-log-ovn\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256784 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run-ovn\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.256815 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.257005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-additional-scripts\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.259074 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-scripts\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.289290 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk8bk\" (UniqueName: \"kubernetes.io/projected/06956f84-5a67-450c-a3b9-b310d8938510-kube-api-access-nk8bk\") pod \"ovn-controller-qdkmz-config-7rvz2\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") " pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.378051 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.845991 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b83822e-4ab4-4df4-b147-f1a854326048" path="/var/lib/kubelet/pods/7b83822e-4ab4-4df4-b147-f1a854326048/volumes"
Jan 23 08:16:09 crc kubenswrapper[4982]: I0123 08:16:09.938288 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qdkmz-config-7rvz2"]
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.543816 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8fs8"
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.697429 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95438dd4-92f1-4e6f-bff5-2b3471920e2f-operator-scripts\") pod \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") "
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.697751 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lwb5\" (UniqueName: \"kubernetes.io/projected/95438dd4-92f1-4e6f-bff5-2b3471920e2f-kube-api-access-9lwb5\") pod \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\" (UID: \"95438dd4-92f1-4e6f-bff5-2b3471920e2f\") "
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.698110 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95438dd4-92f1-4e6f-bff5-2b3471920e2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95438dd4-92f1-4e6f-bff5-2b3471920e2f" (UID: "95438dd4-92f1-4e6f-bff5-2b3471920e2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.698430 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95438dd4-92f1-4e6f-bff5-2b3471920e2f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.703920 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95438dd4-92f1-4e6f-bff5-2b3471920e2f-kube-api-access-9lwb5" (OuterVolumeSpecName: "kube-api-access-9lwb5") pod "95438dd4-92f1-4e6f-bff5-2b3471920e2f" (UID: "95438dd4-92f1-4e6f-bff5-2b3471920e2f"). InnerVolumeSpecName "kube-api-access-9lwb5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.800323 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lwb5\" (UniqueName: \"kubernetes.io/projected/95438dd4-92f1-4e6f-bff5-2b3471920e2f-kube-api-access-9lwb5\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.895432 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-7rvz2" event={"ID":"06956f84-5a67-450c-a3b9-b310d8938510","Type":"ContainerStarted","Data":"6e9f536519bada71797b86bc7eafccf067977908d806bf8ea7d4cbd0aeb4e766"}
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.895504 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-7rvz2" event={"ID":"06956f84-5a67-450c-a3b9-b310d8938510","Type":"ContainerStarted","Data":"535a9400d5e4568f4e6f19a37a79e570dff5478082ffb6e39043604d1d407f9f"}
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.897127 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l8fs8" event={"ID":"95438dd4-92f1-4e6f-bff5-2b3471920e2f","Type":"ContainerDied","Data":"c28863b0e0533df4dc1f9c5f714de39ca697d6deb56da08a06956d5154699caa"}
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.897179 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8fs8"
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.897193 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28863b0e0533df4dc1f9c5f714de39ca697d6deb56da08a06956d5154699caa"
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.899178 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290"}
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.899211 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed"}
Jan 23 08:16:10 crc kubenswrapper[4982]: I0123 08:16:10.923264 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qdkmz-config-7rvz2" podStartSLOduration=1.9232421309999999 podStartE2EDuration="1.923242131s" podCreationTimestamp="2026-01-23 08:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:10.916392959 +0000 UTC m=+1245.396756345" watchObservedRunningTime="2026-01-23 08:16:10.923242131 +0000 UTC m=+1245.403605517"
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.436053 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.436111 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.436148 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww"
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.436901 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c77a76c13f380fffd54942065cc632450e7687109faf0320bb3cc074932836d4"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.436955 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://c77a76c13f380fffd54942065cc632450e7687109faf0320bb3cc074932836d4" gracePeriod=600
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.911746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e"}
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.912117 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440"}
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.915137 4982 generic.go:334] "Generic (PLEG): container finished" podID="06956f84-5a67-450c-a3b9-b310d8938510" containerID="6e9f536519bada71797b86bc7eafccf067977908d806bf8ea7d4cbd0aeb4e766" exitCode=0
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.915742 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-7rvz2" event={"ID":"06956f84-5a67-450c-a3b9-b310d8938510","Type":"ContainerDied","Data":"6e9f536519bada71797b86bc7eafccf067977908d806bf8ea7d4cbd0aeb4e766"}
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.920099 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="c77a76c13f380fffd54942065cc632450e7687109faf0320bb3cc074932836d4" exitCode=0
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.920145 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"c77a76c13f380fffd54942065cc632450e7687109faf0320bb3cc074932836d4"}
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.920173 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"4ff1d104ed341faafc779f69e2fa548cf764b2dac0f08d4eb5491e25e74751a0"}
Jan 23 08:16:11 crc kubenswrapper[4982]: I0123 08:16:11.920194 4982 scope.go:117] "RemoveContainer" containerID="1d24a630527fc6e0922fa5b0c623c16897e3532c564083acc123c1c83ab59cae"
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.190156 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.302790 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run-ovn\") pod \"06956f84-5a67-450c-a3b9-b310d8938510\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") "
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.302826 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-log-ovn\") pod \"06956f84-5a67-450c-a3b9-b310d8938510\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") "
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303021 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "06956f84-5a67-450c-a3b9-b310d8938510" (UID: "06956f84-5a67-450c-a3b9-b310d8938510"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303088 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "06956f84-5a67-450c-a3b9-b310d8938510" (UID: "06956f84-5a67-450c-a3b9-b310d8938510"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303205 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-additional-scripts\") pod \"06956f84-5a67-450c-a3b9-b310d8938510\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") "
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303261 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk8bk\" (UniqueName: \"kubernetes.io/projected/06956f84-5a67-450c-a3b9-b310d8938510-kube-api-access-nk8bk\") pod \"06956f84-5a67-450c-a3b9-b310d8938510\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") "
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303321 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run\") pod \"06956f84-5a67-450c-a3b9-b310d8938510\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") "
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303356 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-scripts\") pod \"06956f84-5a67-450c-a3b9-b310d8938510\" (UID: \"06956f84-5a67-450c-a3b9-b310d8938510\") "
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303482 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run" (OuterVolumeSpecName: "var-run") pod "06956f84-5a67-450c-a3b9-b310d8938510" (UID: "06956f84-5a67-450c-a3b9-b310d8938510"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303815 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303837 4982 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.303850 4982 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06956f84-5a67-450c-a3b9-b310d8938510-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.304242 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "06956f84-5a67-450c-a3b9-b310d8938510" (UID: "06956f84-5a67-450c-a3b9-b310d8938510"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.305101 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-scripts" (OuterVolumeSpecName: "scripts") pod "06956f84-5a67-450c-a3b9-b310d8938510" (UID: "06956f84-5a67-450c-a3b9-b310d8938510"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.307297 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06956f84-5a67-450c-a3b9-b310d8938510-kube-api-access-nk8bk" (OuterVolumeSpecName: "kube-api-access-nk8bk") pod "06956f84-5a67-450c-a3b9-b310d8938510" (UID: "06956f84-5a67-450c-a3b9-b310d8938510"). InnerVolumeSpecName "kube-api-access-nk8bk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.405249 4982 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.405617 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk8bk\" (UniqueName: \"kubernetes.io/projected/06956f84-5a67-450c-a3b9-b310d8938510-kube-api-access-nk8bk\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:13 crc kubenswrapper[4982]: I0123 08:16:13.405660 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06956f84-5a67-450c-a3b9-b310d8938510-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.004889 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz-config-7rvz2" event={"ID":"06956f84-5a67-450c-a3b9-b310d8938510","Type":"ContainerDied","Data":"535a9400d5e4568f4e6f19a37a79e570dff5478082ffb6e39043604d1d407f9f"}
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.005729 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535a9400d5e4568f4e6f19a37a79e570dff5478082ffb6e39043604d1d407f9f"
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.006300 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qdkmz-config-7rvz2"
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.029836 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2"}
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.029899 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505"}
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.029916 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05"}
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.029929 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336"}
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.050440 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qdkmz-config-7rvz2"]
Jan 23 08:16:14 crc kubenswrapper[4982]: I0123 08:16:14.058864 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qdkmz-config-7rvz2"]
Jan 23 08:16:15 crc kubenswrapper[4982]: I0123 08:16:15.048598 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7"}
Jan 23 08:16:15 crc kubenswrapper[4982]: I0123 08:16:15.840616 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06956f84-5a67-450c-a3b9-b310d8938510" path="/var/lib/kubelet/pods/06956f84-5a67-450c-a3b9-b310d8938510/volumes"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.068575 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1"}
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.068667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564"}
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.068682 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973"}
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.068694 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653"}
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.068704 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86"}
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.068716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerStarted","Data":"f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1"}
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.491440 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.243432416 podStartE2EDuration="43.491411566s" podCreationTimestamp="2026-01-23 08:15:33 +0000 UTC" firstStartedPulling="2026-01-23 08:16:08.423669797 +0000 UTC m=+1242.904033183" lastFinishedPulling="2026-01-23 08:16:14.671648947 +0000 UTC m=+1249.152012333" observedRunningTime="2026-01-23 08:16:16.114068431 +0000 UTC m=+1250.594431827" watchObservedRunningTime="2026-01-23 08:16:16.491411566 +0000 UTC m=+1250.971774952"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.501879 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-796m8"]
Jan 23 08:16:16 crc kubenswrapper[4982]: E0123 08:16:16.502405 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95438dd4-92f1-4e6f-bff5-2b3471920e2f" containerName="mariadb-account-create-update"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.502427 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="95438dd4-92f1-4e6f-bff5-2b3471920e2f" containerName="mariadb-account-create-update"
Jan 23 08:16:16 crc kubenswrapper[4982]: E0123 08:16:16.502438 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06956f84-5a67-450c-a3b9-b310d8938510" containerName="ovn-config"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.502445 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="06956f84-5a67-450c-a3b9-b310d8938510" containerName="ovn-config"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.502691 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="06956f84-5a67-450c-a3b9-b310d8938510" containerName="ovn-config"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.502716 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="95438dd4-92f1-4e6f-bff5-2b3471920e2f" containerName="mariadb-account-create-update"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.503873 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.510575 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.532566 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-796m8"]
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.667368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.668117 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.668235 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-svc\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.668310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.668489 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-config\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.668599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl87b\" (UniqueName: \"kubernetes.io/projected/b4de881e-111c-4670-bc32-0c647e9ca69e-kube-api-access-kl87b\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.771233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-config\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.771303 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl87b\" (UniqueName: \"kubernetes.io/projected/b4de881e-111c-4670-bc32-0c647e9ca69e-kube-api-access-kl87b\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.771395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.771459 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.771482 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-svc\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.771496 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.772514 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-config\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.772619 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.772758 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-svc\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.772857 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.773803 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.794102 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl87b\" (UniqueName: \"kubernetes.io/projected/b4de881e-111c-4670-bc32-0c647e9ca69e-kube-api-access-kl87b\") pod \"dnsmasq-dns-8db84466c-796m8\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:16 crc kubenswrapper[4982]: I0123 08:16:16.828453 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:17 crc kubenswrapper[4982]: I0123 08:16:17.123110 4982 generic.go:334] "Generic (PLEG): container finished" podID="3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" containerID="b457d4fe1ca2f0858e1e5fa829f17855220a24e0cd7893a3da815b99c94cffb6" exitCode=0
Jan 23 08:16:17 crc kubenswrapper[4982]: I0123 08:16:17.125208 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slkch" event={"ID":"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b","Type":"ContainerDied","Data":"b457d4fe1ca2f0858e1e5fa829f17855220a24e0cd7893a3da815b99c94cffb6"}
Jan 23 08:16:17 crc kubenswrapper[4982]: I0123 08:16:17.293840 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-796m8"]
Jan 23 08:16:17 crc kubenswrapper[4982]: I0123 08:16:17.919044 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.133741 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerID="123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84" exitCode=0
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.133906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-796m8" event={"ID":"b4de881e-111c-4670-bc32-0c647e9ca69e","Type":"ContainerDied","Data":"123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84"}
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.133933 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-796m8" event={"ID":"b4de881e-111c-4670-bc32-0c647e9ca69e","Type":"ContainerStarted","Data":"b5bcc1d0977083fb728a8609238f20b6a271b9dc623b505e28d43373d8eb1fd8"}
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.295806 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.636649 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-slkch"
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.726838 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-config-data\") pod \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") "
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.726962 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-combined-ca-bundle\") pod \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") "
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.727055 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4t6n\" (UniqueName: \"kubernetes.io/projected/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-kube-api-access-r4t6n\") pod \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") "
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.727155 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-db-sync-config-data\") pod \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\" (UID: \"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b\") "
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.732138 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" (UID: "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.734557 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-kube-api-access-r4t6n" (OuterVolumeSpecName: "kube-api-access-r4t6n") pod "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" (UID: "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b"). InnerVolumeSpecName "kube-api-access-r4t6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.755265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" (UID: "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.777117 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-config-data" (OuterVolumeSpecName: "config-data") pod "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" (UID: "3c0742f3-a1af-45c5-aa3f-a7cf66d4796b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.829034 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4t6n\" (UniqueName: \"kubernetes.io/projected/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-kube-api-access-r4t6n\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.829072 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.829081 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:18 crc kubenswrapper[4982]: I0123 08:16:18.829094 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.149135 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-slkch" event={"ID":"3c0742f3-a1af-45c5-aa3f-a7cf66d4796b","Type":"ContainerDied","Data":"6872207c8b4d83ca5458b3c2fab6f8a05a1f91740dd74e7abf8ff5316a4138b7"}
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.149455 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6872207c8b4d83ca5458b3c2fab6f8a05a1f91740dd74e7abf8ff5316a4138b7"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.149521 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-slkch"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.154263 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-796m8" event={"ID":"b4de881e-111c-4670-bc32-0c647e9ca69e","Type":"ContainerStarted","Data":"9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126"}
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.154492 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-796m8"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.195638 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-796m8" podStartSLOduration=3.195592728 podStartE2EDuration="3.195592728s" podCreationTimestamp="2026-01-23 08:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:19.174296383 +0000 UTC m=+1253.654659769" watchObservedRunningTime="2026-01-23 08:16:19.195592728 +0000 UTC m=+1253.675956114"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.646374 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-796m8"]
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.683749 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cgszc"]
Jan 23 08:16:19 crc kubenswrapper[4982]: E0123 08:16:19.686942 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" containerName="glance-db-sync"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.686970 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" containerName="glance-db-sync"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.687177 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" containerName="glance-db-sync"
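
In the dnsmasq-dns-8db84466c-796m8 "Observed pod startup duration" entry above, firstStartedPulling and lastFinishedPulling are Go's zero time (0001-01-01 00:00:00 +0000 UTC), i.e. no image pull was recorded, so podStartSLOduration equals podStartE2EDuration: 08:16:19.195592728 - 08:16:16 = 3.195592728s. The ovn-controller-qdkmz-config-7rvz2 entry at 08:16:10.923264 shows the same zero-pull pattern.
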
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.688338 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.702592 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cgszc"]
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.848270 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8qrf\" (UniqueName: \"kubernetes.io/projected/882ab76e-9dbd-497e-83a1-62f571bb0426-kube-api-access-q8qrf\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.848455 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-config\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.848494 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.848587 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.848673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.848759 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.950819 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-config\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.951054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.951212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.951362 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.951502 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.951680 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8qrf\" (UniqueName: \"kubernetes.io/projected/882ab76e-9dbd-497e-83a1-62f571bb0426-kube-api-access-q8qrf\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.952164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-config\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.953679 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-sb\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.956699 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-swift-storage-0\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.958178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-svc\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.959613 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-nb\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:19 crc kubenswrapper[4982]: I0123 08:16:19.982269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8qrf\" (UniqueName: \"kubernetes.io/projected/882ab76e-9dbd-497e-83a1-62f571bb0426-kube-api-access-q8qrf\") pod \"dnsmasq-dns-74dfc89d77-cgszc\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.012484 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.057461 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qdxbw"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.059048 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.095843 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qdxbw"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.158518 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mmgg\" (UniqueName: \"kubernetes.io/projected/dc972f61-262c-4435-b54a-5705e0c00c8c-kube-api-access-7mmgg\") pod \"cinder-db-create-qdxbw\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.158944 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc972f61-262c-4435-b54a-5705e0c00c8c-operator-scripts\") pod \"cinder-db-create-qdxbw\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.173514 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-r4ns4"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.174949 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r4ns4"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.206925 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2642-account-create-update-29gw9"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.223793 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2642-account-create-update-29gw9"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.229463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r4ns4"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.229954 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.260482 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2642-account-create-update-29gw9"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.262297 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc972f61-262c-4435-b54a-5705e0c00c8c-operator-scripts\") pod \"cinder-db-create-qdxbw\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.262395 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsd77\" (UniqueName: \"kubernetes.io/projected/3a32c4a5-531d-4ce6-a01e-eb3889d85751-kube-api-access-nsd77\") pod \"barbican-db-create-r4ns4\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " pod="openstack/barbican-db-create-r4ns4"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.262484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mmgg\" (UniqueName: \"kubernetes.io/projected/dc972f61-262c-4435-b54a-5705e0c00c8c-kube-api-access-7mmgg\") pod \"cinder-db-create-qdxbw\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.262512 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a32c4a5-531d-4ce6-a01e-eb3889d85751-operator-scripts\") pod \"barbican-db-create-r4ns4\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " pod="openstack/barbican-db-create-r4ns4"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.263641 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc972f61-262c-4435-b54a-5705e0c00c8c-operator-scripts\") pod \"cinder-db-create-qdxbw\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.299935 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mmgg\" (UniqueName: \"kubernetes.io/projected/dc972f61-262c-4435-b54a-5705e0c00c8c-kube-api-access-7mmgg\") pod \"cinder-db-create-qdxbw\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " pod="openstack/cinder-db-create-qdxbw"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.299998 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1720-account-create-update-5xfq9"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.301134 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1720-account-create-update-5xfq9"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.306983 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.308142 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1720-account-create-update-5xfq9"]
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.364791 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnl4\" (UniqueName: \"kubernetes.io/projected/5df7e23c-1217-4b12-b821-ddd7de9bbf97-kube-api-access-4mnl4\") pod \"cinder-2642-account-create-update-29gw9\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " pod="openstack/cinder-2642-account-create-update-29gw9"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.364832 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a32c4a5-531d-4ce6-a01e-eb3889d85751-operator-scripts\") pod \"barbican-db-create-r4ns4\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " pod="openstack/barbican-db-create-r4ns4"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.364877 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df7e23c-1217-4b12-b821-ddd7de9bbf97-operator-scripts\") pod \"cinder-2642-account-create-update-29gw9\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " pod="openstack/cinder-2642-account-create-update-29gw9"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.364900 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6v8r\" (UniqueName: \"kubernetes.io/projected/2f789991-fa7a-45f6-a59e-9148eef61221-kube-api-access-g6v8r\") pod \"barbican-1720-account-create-update-5xfq9\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " pod="openstack/barbican-1720-account-create-update-5xfq9"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.364960 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f789991-fa7a-45f6-a59e-9148eef61221-operator-scripts\") pod \"barbican-1720-account-create-update-5xfq9\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " pod="openstack/barbican-1720-account-create-update-5xfq9"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.364987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsd77\" (UniqueName: \"kubernetes.io/projected/3a32c4a5-531d-4ce6-a01e-eb3889d85751-kube-api-access-nsd77\") pod \"barbican-db-create-r4ns4\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " pod="openstack/barbican-db-create-r4ns4"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.366170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a32c4a5-531d-4ce6-a01e-eb3889d85751-operator-scripts\") pod \"barbican-db-create-r4ns4\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " pod="openstack/barbican-db-create-r4ns4"
Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.386199 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsd77\" (UniqueName:
\"kubernetes.io/projected/3a32c4a5-531d-4ce6-a01e-eb3889d85751-kube-api-access-nsd77\") pod \"barbican-db-create-r4ns4\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " pod="openstack/barbican-db-create-r4ns4" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.453552 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxbw" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.468367 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f789991-fa7a-45f6-a59e-9148eef61221-operator-scripts\") pod \"barbican-1720-account-create-update-5xfq9\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.468484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mnl4\" (UniqueName: \"kubernetes.io/projected/5df7e23c-1217-4b12-b821-ddd7de9bbf97-kube-api-access-4mnl4\") pod \"cinder-2642-account-create-update-29gw9\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.468536 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df7e23c-1217-4b12-b821-ddd7de9bbf97-operator-scripts\") pod \"cinder-2642-account-create-update-29gw9\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.468562 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6v8r\" (UniqueName: \"kubernetes.io/projected/2f789991-fa7a-45f6-a59e-9148eef61221-kube-api-access-g6v8r\") pod \"barbican-1720-account-create-update-5xfq9\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.469915 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f789991-fa7a-45f6-a59e-9148eef61221-operator-scripts\") pod \"barbican-1720-account-create-update-5xfq9\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.470542 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df7e23c-1217-4b12-b821-ddd7de9bbf97-operator-scripts\") pod \"cinder-2642-account-create-update-29gw9\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.493705 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6v8r\" (UniqueName: \"kubernetes.io/projected/2f789991-fa7a-45f6-a59e-9148eef61221-kube-api-access-g6v8r\") pod \"barbican-1720-account-create-update-5xfq9\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.499470 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnl4\" (UniqueName: 
\"kubernetes.io/projected/5df7e23c-1217-4b12-b821-ddd7de9bbf97-kube-api-access-4mnl4\") pod \"cinder-2642-account-create-update-29gw9\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.499519 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4xcgz"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.501344 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.503672 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zhwks" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.504676 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.505045 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.505503 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.508606 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r4ns4" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.557703 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4xcgz"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.558175 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.570968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbfgt\" (UniqueName: \"kubernetes.io/projected/2714e6c8-2065-4423-8249-a5b382970a81-kube-api-access-sbfgt\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.571084 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-config-data\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.571133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-combined-ca-bundle\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.590069 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7rttj"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.591538 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.600773 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bf3c-account-create-update-5lf2k"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.602196 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.608404 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.613049 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7rttj"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.629110 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bf3c-account-create-update-5lf2k"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmhrt\" (UniqueName: \"kubernetes.io/projected/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-kube-api-access-fmhrt\") pod \"neutron-db-create-7rttj\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672475 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbfgt\" (UniqueName: \"kubernetes.io/projected/2714e6c8-2065-4423-8249-a5b382970a81-kube-api-access-sbfgt\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672497 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bp8j\" (UniqueName: \"kubernetes.io/projected/337e4f54-165a-4b51-90f9-32c8599bc1e3-kube-api-access-7bp8j\") pod \"neutron-bf3c-account-create-update-5lf2k\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-operator-scripts\") pod \"neutron-db-create-7rttj\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672594 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/337e4f54-165a-4b51-90f9-32c8599bc1e3-operator-scripts\") pod \"neutron-bf3c-account-create-update-5lf2k\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672686 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-config-data\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.672732 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-combined-ca-bundle\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.674087 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.724210 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbfgt\" (UniqueName: \"kubernetes.io/projected/2714e6c8-2065-4423-8249-a5b382970a81-kube-api-access-sbfgt\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.725460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-combined-ca-bundle\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.729441 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-config-data\") pod \"keystone-db-sync-4xcgz\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.774473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmhrt\" (UniqueName: \"kubernetes.io/projected/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-kube-api-access-fmhrt\") pod \"neutron-db-create-7rttj\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.775848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bp8j\" (UniqueName: \"kubernetes.io/projected/337e4f54-165a-4b51-90f9-32c8599bc1e3-kube-api-access-7bp8j\") pod \"neutron-bf3c-account-create-update-5lf2k\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.775970 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-operator-scripts\") pod \"neutron-db-create-7rttj\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.776004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/337e4f54-165a-4b51-90f9-32c8599bc1e3-operator-scripts\") pod \"neutron-bf3c-account-create-update-5lf2k\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.776873 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/337e4f54-165a-4b51-90f9-32c8599bc1e3-operator-scripts\") pod \"neutron-bf3c-account-create-update-5lf2k\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " 
pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.777086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-operator-scripts\") pod \"neutron-db-create-7rttj\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.790225 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cgszc"] Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.801236 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmhrt\" (UniqueName: \"kubernetes.io/projected/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-kube-api-access-fmhrt\") pod \"neutron-db-create-7rttj\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.805652 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bp8j\" (UniqueName: \"kubernetes.io/projected/337e4f54-165a-4b51-90f9-32c8599bc1e3-kube-api-access-7bp8j\") pod \"neutron-bf3c-account-create-update-5lf2k\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:20 crc kubenswrapper[4982]: I0123 08:16:20.950962 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.017685 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.017876 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.166035 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qdxbw"] Jan 23 08:16:21 crc kubenswrapper[4982]: W0123 08:16:21.172225 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc972f61_262c_4435_b54a_5705e0c00c8c.slice/crio-f1b1aeb41092ccc59286c7a3c33f0c031ccb326f4ac4fd56ae9e22e7432405bc WatchSource:0}: Error finding container f1b1aeb41092ccc59286c7a3c33f0c031ccb326f4ac4fd56ae9e22e7432405bc: Status 404 returned error can't find the container with id f1b1aeb41092ccc59286c7a3c33f0c031ccb326f4ac4fd56ae9e22e7432405bc Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.180468 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r4ns4"] Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.225076 4982 generic.go:334] "Generic (PLEG): container finished" podID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerID="8e607fb51626f06450e23fff97474e342c07efaa679050b28c97bc8461e03f11" exitCode=0 Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.225147 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" event={"ID":"882ab76e-9dbd-497e-83a1-62f571bb0426","Type":"ContainerDied","Data":"8e607fb51626f06450e23fff97474e342c07efaa679050b28c97bc8461e03f11"} Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.225177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" event={"ID":"882ab76e-9dbd-497e-83a1-62f571bb0426","Type":"ContainerStarted","Data":"d13b1d90b90cf27d7e1dfa1d0224bcb103415fe0d9239f7466c7f6a77814b515"} Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.242304 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qdxbw" event={"ID":"dc972f61-262c-4435-b54a-5705e0c00c8c","Type":"ContainerStarted","Data":"f1b1aeb41092ccc59286c7a3c33f0c031ccb326f4ac4fd56ae9e22e7432405bc"} Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.242287 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-796m8" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerName="dnsmasq-dns" containerID="cri-o://9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126" gracePeriod=10 Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.311427 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2642-account-create-update-29gw9"] Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.389170 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1720-account-create-update-5xfq9"] Jan 23 08:16:21 crc kubenswrapper[4982]: W0123 08:16:21.418393 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f789991_fa7a_45f6_a59e_9148eef61221.slice/crio-477cd7aced03354d6784bab5c7a1e16a1ba55bcfa8e2832a484f45feb1ae002e WatchSource:0}: Error finding container 477cd7aced03354d6784bab5c7a1e16a1ba55bcfa8e2832a484f45feb1ae002e: Status 404 returned error can't find the container with id 477cd7aced03354d6784bab5c7a1e16a1ba55bcfa8e2832a484f45feb1ae002e Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.513250 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4xcgz"] Jan 23 08:16:21 crc 
kubenswrapper[4982]: W0123 08:16:21.529480 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2714e6c8_2065_4423_8249_a5b382970a81.slice/crio-81ac6266de0d51877f1dd6116d433da66c9bdcae970a79dda8405dc655d768b7 WatchSource:0}: Error finding container 81ac6266de0d51877f1dd6116d433da66c9bdcae970a79dda8405dc655d768b7: Status 404 returned error can't find the container with id 81ac6266de0d51877f1dd6116d433da66c9bdcae970a79dda8405dc655d768b7 Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.718972 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bf3c-account-create-update-5lf2k"] Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.761758 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7rttj"] Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.839744 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-796m8" Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.924779 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl87b\" (UniqueName: \"kubernetes.io/projected/b4de881e-111c-4670-bc32-0c647e9ca69e-kube-api-access-kl87b\") pod \"b4de881e-111c-4670-bc32-0c647e9ca69e\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.924872 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-sb\") pod \"b4de881e-111c-4670-bc32-0c647e9ca69e\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.924903 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-swift-storage-0\") pod \"b4de881e-111c-4670-bc32-0c647e9ca69e\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.924933 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-nb\") pod \"b4de881e-111c-4670-bc32-0c647e9ca69e\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.925043 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-config\") pod \"b4de881e-111c-4670-bc32-0c647e9ca69e\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " Jan 23 08:16:21 crc kubenswrapper[4982]: I0123 08:16:21.925103 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-svc\") pod \"b4de881e-111c-4670-bc32-0c647e9ca69e\" (UID: \"b4de881e-111c-4670-bc32-0c647e9ca69e\") " Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.019788 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4de881e-111c-4670-bc32-0c647e9ca69e-kube-api-access-kl87b" (OuterVolumeSpecName: "kube-api-access-kl87b") pod "b4de881e-111c-4670-bc32-0c647e9ca69e" (UID: "b4de881e-111c-4670-bc32-0c647e9ca69e"). 
InnerVolumeSpecName "kube-api-access-kl87b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.032054 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl87b\" (UniqueName: \"kubernetes.io/projected/b4de881e-111c-4670-bc32-0c647e9ca69e-kube-api-access-kl87b\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.265528 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" event={"ID":"882ab76e-9dbd-497e-83a1-62f571bb0426","Type":"ContainerStarted","Data":"cbb90b7e071130cf99043ac6d0b66388f04986b012fcd87f66c3166cd81d5528"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.265788 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.267272 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2642-account-create-update-29gw9" event={"ID":"5df7e23c-1217-4b12-b821-ddd7de9bbf97","Type":"ContainerStarted","Data":"1084e86a463c1d4264f9f8b01b115bd2a0f698d97f3fbe21d0cdc83bcf455e68"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.267319 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2642-account-create-update-29gw9" event={"ID":"5df7e23c-1217-4b12-b821-ddd7de9bbf97","Type":"ContainerStarted","Data":"70b9afbaa99bc45a5ccde68be713b4a0aa82229cb73e76554cc9535314fdf6cf"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.268664 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf3c-account-create-update-5lf2k" event={"ID":"337e4f54-165a-4b51-90f9-32c8599bc1e3","Type":"ContainerStarted","Data":"278db28a0cf7930d96b180ee9b125e8ce2867ebb8d42cacddc4ba45fa62b07b1"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.270571 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rttj" event={"ID":"06516bc7-9cda-4228-98b9-4e52fcd3bfa1","Type":"ContainerStarted","Data":"dc8b072804674d9500d70e05d1ee3ac4ea871b40887a8b3e93ed8a7206e42c2f"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.272158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4ns4" event={"ID":"3a32c4a5-531d-4ce6-a01e-eb3889d85751","Type":"ContainerStarted","Data":"6f1225f7fc8c432b1c7f7b6be201999745ea5c060b7676ab99a9cee253928782"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.273423 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xcgz" event={"ID":"2714e6c8-2065-4423-8249-a5b382970a81","Type":"ContainerStarted","Data":"81ac6266de0d51877f1dd6116d433da66c9bdcae970a79dda8405dc655d768b7"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.274935 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1720-account-create-update-5xfq9" event={"ID":"2f789991-fa7a-45f6-a59e-9148eef61221","Type":"ContainerStarted","Data":"15e6f505315b1c8001f1aa5174ed3d2b374f4b8baa37a83772a2ff9200f5a655"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.274956 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1720-account-create-update-5xfq9" event={"ID":"2f789991-fa7a-45f6-a59e-9148eef61221","Type":"ContainerStarted","Data":"477cd7aced03354d6784bab5c7a1e16a1ba55bcfa8e2832a484f45feb1ae002e"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.286227 4982 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-db-create-qdxbw" event={"ID":"dc972f61-262c-4435-b54a-5705e0c00c8c","Type":"ContainerStarted","Data":"2a92e87a0f81ea0565e71c6d4b8c5e7c34d799fb7d29c296c158ddef3d95a041"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.295569 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" podStartSLOduration=3.29555024 podStartE2EDuration="3.29555024s" podCreationTimestamp="2026-01-23 08:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:22.289891 +0000 UTC m=+1256.770254386" watchObservedRunningTime="2026-01-23 08:16:22.29555024 +0000 UTC m=+1256.775913616" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.311633 4982 generic.go:334] "Generic (PLEG): container finished" podID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerID="9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126" exitCode=0 Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.311684 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-796m8" event={"ID":"b4de881e-111c-4670-bc32-0c647e9ca69e","Type":"ContainerDied","Data":"9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.311712 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-796m8" event={"ID":"b4de881e-111c-4670-bc32-0c647e9ca69e","Type":"ContainerDied","Data":"b5bcc1d0977083fb728a8609238f20b6a271b9dc623b505e28d43373d8eb1fd8"} Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.311732 4982 scope.go:117] "RemoveContainer" containerID="9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.311976 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-796m8" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.331786 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qdxbw" podStartSLOduration=2.33176257 podStartE2EDuration="2.33176257s" podCreationTimestamp="2026-01-23 08:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:22.31025264 +0000 UTC m=+1256.790616036" watchObservedRunningTime="2026-01-23 08:16:22.33176257 +0000 UTC m=+1256.812125956" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.361443 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-1720-account-create-update-5xfq9" podStartSLOduration=2.361423247 podStartE2EDuration="2.361423247s" podCreationTimestamp="2026-01-23 08:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:22.326779588 +0000 UTC m=+1256.807142974" watchObservedRunningTime="2026-01-23 08:16:22.361423247 +0000 UTC m=+1256.841786633" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.395108 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2642-account-create-update-29gw9" podStartSLOduration=2.395080779 podStartE2EDuration="2.395080779s" podCreationTimestamp="2026-01-23 08:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:22.343473331 +0000 UTC m=+1256.823836717" watchObservedRunningTime="2026-01-23 08:16:22.395080779 +0000 UTC m=+1256.875444165" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.684508 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-config" (OuterVolumeSpecName: "config") pod "b4de881e-111c-4670-bc32-0c647e9ca69e" (UID: "b4de881e-111c-4670-bc32-0c647e9ca69e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.701089 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b4de881e-111c-4670-bc32-0c647e9ca69e" (UID: "b4de881e-111c-4670-bc32-0c647e9ca69e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.702422 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b4de881e-111c-4670-bc32-0c647e9ca69e" (UID: "b4de881e-111c-4670-bc32-0c647e9ca69e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.724789 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b4de881e-111c-4670-bc32-0c647e9ca69e" (UID: "b4de881e-111c-4670-bc32-0c647e9ca69e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.730081 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b4de881e-111c-4670-bc32-0c647e9ca69e" (UID: "b4de881e-111c-4670-bc32-0c647e9ca69e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.762091 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.762123 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.762136 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.762146 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.762155 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4de881e-111c-4670-bc32-0c647e9ca69e-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.818943 4982 scope.go:117] "RemoveContainer" containerID="123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.868784 4982 scope.go:117] "RemoveContainer" containerID="9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126" Jan 23 08:16:22 crc kubenswrapper[4982]: E0123 08:16:22.872983 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126\": container with ID starting with 9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126 not found: ID does not exist" containerID="9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.873038 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126"} err="failed to get container status \"9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126\": rpc error: code = NotFound desc = could not find container \"9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126\": container with ID starting with 9379a335e3e20f2d18195dc6b1509b452f902daa2e77d1ac64086ac6754b9126 not found: ID does not exist" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.873071 4982 scope.go:117] "RemoveContainer" containerID="123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84" Jan 23 08:16:22 crc kubenswrapper[4982]: E0123 08:16:22.876706 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84\": container with ID starting with 123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84 not found: ID does not exist" containerID="123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.876767 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84"} err="failed to get container status \"123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84\": rpc error: code = NotFound desc = could not find container \"123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84\": container with ID starting with 123ce294d812f3a60fa174f4529985f114249e830c54453e58d221840908fd84 not found: ID does not exist" Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.981696 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-796m8"] Jan 23 08:16:22 crc kubenswrapper[4982]: I0123 08:16:22.988692 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-796m8"] Jan 23 08:16:23 crc kubenswrapper[4982]: I0123 08:16:23.328844 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rttj" event={"ID":"06516bc7-9cda-4228-98b9-4e52fcd3bfa1","Type":"ContainerStarted","Data":"6440efd302531e40a8f133fa95e19b67d505d0b86e33f518e7d4d7f7fcc9d03d"} Jan 23 08:16:23 crc kubenswrapper[4982]: I0123 08:16:23.330084 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4ns4" event={"ID":"3a32c4a5-531d-4ce6-a01e-eb3889d85751","Type":"ContainerStarted","Data":"261cba7e7891939fcd41a39d9b55b43d04b45f8f7082417d66f929108ba417c9"} Jan 23 08:16:23 crc kubenswrapper[4982]: I0123 08:16:23.842368 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" path="/var/lib/kubelet/pods/b4de881e-111c-4670-bc32-0c647e9ca69e/volumes" Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.346601 4982 generic.go:334] "Generic (PLEG): container finished" podID="3a32c4a5-531d-4ce6-a01e-eb3889d85751" containerID="261cba7e7891939fcd41a39d9b55b43d04b45f8f7082417d66f929108ba417c9" exitCode=0 Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.346744 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4ns4" event={"ID":"3a32c4a5-531d-4ce6-a01e-eb3889d85751","Type":"ContainerDied","Data":"261cba7e7891939fcd41a39d9b55b43d04b45f8f7082417d66f929108ba417c9"} Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.351455 4982 generic.go:334] "Generic (PLEG): container finished" podID="5df7e23c-1217-4b12-b821-ddd7de9bbf97" containerID="1084e86a463c1d4264f9f8b01b115bd2a0f698d97f3fbe21d0cdc83bcf455e68" exitCode=0 Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.351539 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2642-account-create-update-29gw9" event={"ID":"5df7e23c-1217-4b12-b821-ddd7de9bbf97","Type":"ContainerDied","Data":"1084e86a463c1d4264f9f8b01b115bd2a0f698d97f3fbe21d0cdc83bcf455e68"} Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.353730 4982 generic.go:334] "Generic (PLEG): container finished" podID="337e4f54-165a-4b51-90f9-32c8599bc1e3" containerID="fc5aed5043540bc597a43450b8e378498c0ebb4cf006a6e80b6393e90b942401" exitCode=0 Jan 23 08:16:24 crc kubenswrapper[4982]: 
I0123 08:16:24.353807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf3c-account-create-update-5lf2k" event={"ID":"337e4f54-165a-4b51-90f9-32c8599bc1e3","Type":"ContainerDied","Data":"fc5aed5043540bc597a43450b8e378498c0ebb4cf006a6e80b6393e90b942401"} Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.359561 4982 generic.go:334] "Generic (PLEG): container finished" podID="2f789991-fa7a-45f6-a59e-9148eef61221" containerID="15e6f505315b1c8001f1aa5174ed3d2b374f4b8baa37a83772a2ff9200f5a655" exitCode=0 Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.359718 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1720-account-create-update-5xfq9" event={"ID":"2f789991-fa7a-45f6-a59e-9148eef61221","Type":"ContainerDied","Data":"15e6f505315b1c8001f1aa5174ed3d2b374f4b8baa37a83772a2ff9200f5a655"} Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.364299 4982 generic.go:334] "Generic (PLEG): container finished" podID="dc972f61-262c-4435-b54a-5705e0c00c8c" containerID="2a92e87a0f81ea0565e71c6d4b8c5e7c34d799fb7d29c296c158ddef3d95a041" exitCode=0 Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.364373 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qdxbw" event={"ID":"dc972f61-262c-4435-b54a-5705e0c00c8c","Type":"ContainerDied","Data":"2a92e87a0f81ea0565e71c6d4b8c5e7c34d799fb7d29c296c158ddef3d95a041"} Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.366763 4982 generic.go:334] "Generic (PLEG): container finished" podID="06516bc7-9cda-4228-98b9-4e52fcd3bfa1" containerID="6440efd302531e40a8f133fa95e19b67d505d0b86e33f518e7d4d7f7fcc9d03d" exitCode=0 Jan 23 08:16:24 crc kubenswrapper[4982]: I0123 08:16:24.366819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rttj" event={"ID":"06516bc7-9cda-4228-98b9-4e52fcd3bfa1","Type":"ContainerDied","Data":"6440efd302531e40a8f133fa95e19b67d505d0b86e33f518e7d4d7f7fcc9d03d"} Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.277902 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.327230 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bp8j\" (UniqueName: \"kubernetes.io/projected/337e4f54-165a-4b51-90f9-32c8599bc1e3-kube-api-access-7bp8j\") pod \"337e4f54-165a-4b51-90f9-32c8599bc1e3\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.327373 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/337e4f54-165a-4b51-90f9-32c8599bc1e3-operator-scripts\") pod \"337e4f54-165a-4b51-90f9-32c8599bc1e3\" (UID: \"337e4f54-165a-4b51-90f9-32c8599bc1e3\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.328201 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/337e4f54-165a-4b51-90f9-32c8599bc1e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "337e4f54-165a-4b51-90f9-32c8599bc1e3" (UID: "337e4f54-165a-4b51-90f9-32c8599bc1e3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.340591 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337e4f54-165a-4b51-90f9-32c8599bc1e3-kube-api-access-7bp8j" (OuterVolumeSpecName: "kube-api-access-7bp8j") pod "337e4f54-165a-4b51-90f9-32c8599bc1e3" (UID: "337e4f54-165a-4b51-90f9-32c8599bc1e3"). InnerVolumeSpecName "kube-api-access-7bp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.389528 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf3c-account-create-update-5lf2k" event={"ID":"337e4f54-165a-4b51-90f9-32c8599bc1e3","Type":"ContainerDied","Data":"278db28a0cf7930d96b180ee9b125e8ce2867ebb8d42cacddc4ba45fa62b07b1"} Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.389881 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278db28a0cf7930d96b180ee9b125e8ce2867ebb8d42cacddc4ba45fa62b07b1" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.389707 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf3c-account-create-update-5lf2k" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.402476 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7rttj" event={"ID":"06516bc7-9cda-4228-98b9-4e52fcd3bfa1","Type":"ContainerDied","Data":"dc8b072804674d9500d70e05d1ee3ac4ea871b40887a8b3e93ed8a7206e42c2f"} Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.402527 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8b072804674d9500d70e05d1ee3ac4ea871b40887a8b3e93ed8a7206e42c2f" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.416546 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.429122 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmhrt\" (UniqueName: \"kubernetes.io/projected/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-kube-api-access-fmhrt\") pod \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.429306 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-operator-scripts\") pod \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\" (UID: \"06516bc7-9cda-4228-98b9-4e52fcd3bfa1\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.429775 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bp8j\" (UniqueName: \"kubernetes.io/projected/337e4f54-165a-4b51-90f9-32c8599bc1e3-kube-api-access-7bp8j\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.429794 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/337e4f54-165a-4b51-90f9-32c8599bc1e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.430311 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06516bc7-9cda-4228-98b9-4e52fcd3bfa1" (UID: "06516bc7-9cda-4228-98b9-4e52fcd3bfa1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.470956 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-kube-api-access-fmhrt" (OuterVolumeSpecName: "kube-api-access-fmhrt") pod "06516bc7-9cda-4228-98b9-4e52fcd3bfa1" (UID: "06516bc7-9cda-4228-98b9-4e52fcd3bfa1"). InnerVolumeSpecName "kube-api-access-fmhrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.531959 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmhrt\" (UniqueName: \"kubernetes.io/projected/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-kube-api-access-fmhrt\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.531986 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06516bc7-9cda-4228-98b9-4e52fcd3bfa1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.537973 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxbw" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.544592 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.555103 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r4ns4" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.583494 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736127 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc972f61-262c-4435-b54a-5705e0c00c8c-operator-scripts\") pod \"dc972f61-262c-4435-b54a-5705e0c00c8c\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736259 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f789991-fa7a-45f6-a59e-9148eef61221-operator-scripts\") pod \"2f789991-fa7a-45f6-a59e-9148eef61221\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsd77\" (UniqueName: \"kubernetes.io/projected/3a32c4a5-531d-4ce6-a01e-eb3889d85751-kube-api-access-nsd77\") pod \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736414 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6v8r\" (UniqueName: \"kubernetes.io/projected/2f789991-fa7a-45f6-a59e-9148eef61221-kube-api-access-g6v8r\") pod \"2f789991-fa7a-45f6-a59e-9148eef61221\" (UID: \"2f789991-fa7a-45f6-a59e-9148eef61221\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736455 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mnl4\" (UniqueName: \"kubernetes.io/projected/5df7e23c-1217-4b12-b821-ddd7de9bbf97-kube-api-access-4mnl4\") pod \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736507 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a32c4a5-531d-4ce6-a01e-eb3889d85751-operator-scripts\") pod \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\" (UID: \"3a32c4a5-531d-4ce6-a01e-eb3889d85751\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736573 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mmgg\" (UniqueName: \"kubernetes.io/projected/dc972f61-262c-4435-b54a-5705e0c00c8c-kube-api-access-7mmgg\") pod \"dc972f61-262c-4435-b54a-5705e0c00c8c\" (UID: \"dc972f61-262c-4435-b54a-5705e0c00c8c\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.736773 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df7e23c-1217-4b12-b821-ddd7de9bbf97-operator-scripts\") pod \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\" (UID: \"5df7e23c-1217-4b12-b821-ddd7de9bbf97\") " Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.737294 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f789991-fa7a-45f6-a59e-9148eef61221-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f789991-fa7a-45f6-a59e-9148eef61221" (UID: "2f789991-fa7a-45f6-a59e-9148eef61221"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.737888 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df7e23c-1217-4b12-b821-ddd7de9bbf97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5df7e23c-1217-4b12-b821-ddd7de9bbf97" (UID: "5df7e23c-1217-4b12-b821-ddd7de9bbf97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.738006 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc972f61-262c-4435-b54a-5705e0c00c8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc972f61-262c-4435-b54a-5705e0c00c8c" (UID: "dc972f61-262c-4435-b54a-5705e0c00c8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.738143 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a32c4a5-531d-4ce6-a01e-eb3889d85751-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a32c4a5-531d-4ce6-a01e-eb3889d85751" (UID: "3a32c4a5-531d-4ce6-a01e-eb3889d85751"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.742763 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc972f61-262c-4435-b54a-5705e0c00c8c-kube-api-access-7mmgg" (OuterVolumeSpecName: "kube-api-access-7mmgg") pod "dc972f61-262c-4435-b54a-5705e0c00c8c" (UID: "dc972f61-262c-4435-b54a-5705e0c00c8c"). InnerVolumeSpecName "kube-api-access-7mmgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.743051 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a32c4a5-531d-4ce6-a01e-eb3889d85751-kube-api-access-nsd77" (OuterVolumeSpecName: "kube-api-access-nsd77") pod "3a32c4a5-531d-4ce6-a01e-eb3889d85751" (UID: "3a32c4a5-531d-4ce6-a01e-eb3889d85751"). InnerVolumeSpecName "kube-api-access-nsd77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.743684 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f789991-fa7a-45f6-a59e-9148eef61221-kube-api-access-g6v8r" (OuterVolumeSpecName: "kube-api-access-g6v8r") pod "2f789991-fa7a-45f6-a59e-9148eef61221" (UID: "2f789991-fa7a-45f6-a59e-9148eef61221"). InnerVolumeSpecName "kube-api-access-g6v8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.744581 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df7e23c-1217-4b12-b821-ddd7de9bbf97-kube-api-access-4mnl4" (OuterVolumeSpecName: "kube-api-access-4mnl4") pod "5df7e23c-1217-4b12-b821-ddd7de9bbf97" (UID: "5df7e23c-1217-4b12-b821-ddd7de9bbf97"). InnerVolumeSpecName "kube-api-access-4mnl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840425 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mnl4\" (UniqueName: \"kubernetes.io/projected/5df7e23c-1217-4b12-b821-ddd7de9bbf97-kube-api-access-4mnl4\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840492 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a32c4a5-531d-4ce6-a01e-eb3889d85751-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840512 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mmgg\" (UniqueName: \"kubernetes.io/projected/dc972f61-262c-4435-b54a-5705e0c00c8c-kube-api-access-7mmgg\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840531 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df7e23c-1217-4b12-b821-ddd7de9bbf97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840551 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc972f61-262c-4435-b54a-5705e0c00c8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840570 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f789991-fa7a-45f6-a59e-9148eef61221-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840591 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsd77\" (UniqueName: \"kubernetes.io/projected/3a32c4a5-531d-4ce6-a01e-eb3889d85751-kube-api-access-nsd77\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:26 crc kubenswrapper[4982]: I0123 08:16:26.840611 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6v8r\" (UniqueName: \"kubernetes.io/projected/2f789991-fa7a-45f6-a59e-9148eef61221-kube-api-access-g6v8r\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.420559 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2642-account-create-update-29gw9" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.420557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2642-account-create-update-29gw9" event={"ID":"5df7e23c-1217-4b12-b821-ddd7de9bbf97","Type":"ContainerDied","Data":"70b9afbaa99bc45a5ccde68be713b4a0aa82229cb73e76554cc9535314fdf6cf"} Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.422924 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b9afbaa99bc45a5ccde68be713b4a0aa82229cb73e76554cc9535314fdf6cf" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.423267 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1720-account-create-update-5xfq9" event={"ID":"2f789991-fa7a-45f6-a59e-9148eef61221","Type":"ContainerDied","Data":"477cd7aced03354d6784bab5c7a1e16a1ba55bcfa8e2832a484f45feb1ae002e"} Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.423350 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477cd7aced03354d6784bab5c7a1e16a1ba55bcfa8e2832a484f45feb1ae002e" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.423389 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1720-account-create-update-5xfq9" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.425054 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qdxbw" event={"ID":"dc972f61-262c-4435-b54a-5705e0c00c8c","Type":"ContainerDied","Data":"f1b1aeb41092ccc59286c7a3c33f0c031ccb326f4ac4fd56ae9e22e7432405bc"} Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.425104 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b1aeb41092ccc59286c7a3c33f0c031ccb326f4ac4fd56ae9e22e7432405bc" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.425499 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qdxbw" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.426701 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r4ns4" event={"ID":"3a32c4a5-531d-4ce6-a01e-eb3889d85751","Type":"ContainerDied","Data":"6f1225f7fc8c432b1c7f7b6be201999745ea5c060b7676ab99a9cee253928782"} Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.426736 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7rttj" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.426754 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r4ns4" Jan 23 08:16:27 crc kubenswrapper[4982]: I0123 08:16:27.426756 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1225f7fc8c432b1c7f7b6be201999745ea5c060b7676ab99a9cee253928782" Jan 23 08:16:30 crc kubenswrapper[4982]: I0123 08:16:30.013895 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" Jan 23 08:16:30 crc kubenswrapper[4982]: I0123 08:16:30.081576 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-89w5f"] Jan 23 08:16:30 crc kubenswrapper[4982]: I0123 08:16:30.081914 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerName="dnsmasq-dns" containerID="cri-o://72960fc5fa18b86c29b0f7bc101aa575bc0b09dacc769f230125984648aa5d4c" gracePeriod=10 Jan 23 08:16:30 crc kubenswrapper[4982]: I0123 08:16:30.462087 4982 generic.go:334] "Generic (PLEG): container finished" podID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerID="72960fc5fa18b86c29b0f7bc101aa575bc0b09dacc769f230125984648aa5d4c" exitCode=0 Jan 23 08:16:30 crc kubenswrapper[4982]: I0123 08:16:30.462155 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" event={"ID":"f61e33e5-edf4-4387-92ff-f96f7b0b898c","Type":"ContainerDied","Data":"72960fc5fa18b86c29b0f7bc101aa575bc0b09dacc769f230125984648aa5d4c"} Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.132744 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.170146 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-nb\") pod \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.170203 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2nkl\" (UniqueName: \"kubernetes.io/projected/f61e33e5-edf4-4387-92ff-f96f7b0b898c-kube-api-access-r2nkl\") pod \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.170236 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-dns-svc\") pod \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.170299 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-config\") pod \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.170358 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-sb\") pod \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\" (UID: \"f61e33e5-edf4-4387-92ff-f96f7b0b898c\") " Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 
08:16:32.190133 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61e33e5-edf4-4387-92ff-f96f7b0b898c-kube-api-access-r2nkl" (OuterVolumeSpecName: "kube-api-access-r2nkl") pod "f61e33e5-edf4-4387-92ff-f96f7b0b898c" (UID: "f61e33e5-edf4-4387-92ff-f96f7b0b898c"). InnerVolumeSpecName "kube-api-access-r2nkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.259700 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f61e33e5-edf4-4387-92ff-f96f7b0b898c" (UID: "f61e33e5-edf4-4387-92ff-f96f7b0b898c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.263716 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-config" (OuterVolumeSpecName: "config") pod "f61e33e5-edf4-4387-92ff-f96f7b0b898c" (UID: "f61e33e5-edf4-4387-92ff-f96f7b0b898c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.264417 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f61e33e5-edf4-4387-92ff-f96f7b0b898c" (UID: "f61e33e5-edf4-4387-92ff-f96f7b0b898c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.272287 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.272339 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2nkl\" (UniqueName: \"kubernetes.io/projected/f61e33e5-edf4-4387-92ff-f96f7b0b898c-kube-api-access-r2nkl\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.272356 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.272368 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.280462 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f61e33e5-edf4-4387-92ff-f96f7b0b898c" (UID: "f61e33e5-edf4-4387-92ff-f96f7b0b898c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.372975 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f61e33e5-edf4-4387-92ff-f96f7b0b898c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.490644 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" event={"ID":"f61e33e5-edf4-4387-92ff-f96f7b0b898c","Type":"ContainerDied","Data":"e9332a188cb6e2e0cd788cf158ed8912ae3846dbadeed2db1a52359a6877389b"} Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.490708 4982 scope.go:117] "RemoveContainer" containerID="72960fc5fa18b86c29b0f7bc101aa575bc0b09dacc769f230125984648aa5d4c" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.490854 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-89w5f" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.503276 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xcgz" event={"ID":"2714e6c8-2065-4423-8249-a5b382970a81","Type":"ContainerStarted","Data":"35d559689a418b4577df8ebd068ae7c6762997056d9969b9fcfea00766c8a2a0"} Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.530997 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4xcgz" podStartSLOduration=2.333338372 podStartE2EDuration="12.530972754s" podCreationTimestamp="2026-01-23 08:16:20 +0000 UTC" firstStartedPulling="2026-01-23 08:16:21.534598084 +0000 UTC m=+1256.014961470" lastFinishedPulling="2026-01-23 08:16:31.732232466 +0000 UTC m=+1266.212595852" observedRunningTime="2026-01-23 08:16:32.515528154 +0000 UTC m=+1266.995891550" watchObservedRunningTime="2026-01-23 08:16:32.530972754 +0000 UTC m=+1267.011336140" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.532738 4982 scope.go:117] "RemoveContainer" containerID="4c9d47cd1cf364a2919e7fcb3f539cf3073d5acd20adc967ae18ae7a51dca66d" Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.545262 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-89w5f"] Jan 23 08:16:32 crc kubenswrapper[4982]: I0123 08:16:32.555000 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-89w5f"] Jan 23 08:16:33 crc kubenswrapper[4982]: I0123 08:16:33.841461 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" path="/var/lib/kubelet/pods/f61e33e5-edf4-4387-92ff-f96f7b0b898c/volumes" Jan 23 08:16:37 crc kubenswrapper[4982]: I0123 08:16:37.580011 4982 generic.go:334] "Generic (PLEG): container finished" podID="2714e6c8-2065-4423-8249-a5b382970a81" containerID="35d559689a418b4577df8ebd068ae7c6762997056d9969b9fcfea00766c8a2a0" exitCode=0 Jan 23 08:16:37 crc kubenswrapper[4982]: I0123 08:16:37.580064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xcgz" event={"ID":"2714e6c8-2065-4423-8249-a5b382970a81","Type":"ContainerDied","Data":"35d559689a418b4577df8ebd068ae7c6762997056d9969b9fcfea00766c8a2a0"} Jan 23 08:16:38 crc kubenswrapper[4982]: I0123 08:16:38.930665 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.000830 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbfgt\" (UniqueName: \"kubernetes.io/projected/2714e6c8-2065-4423-8249-a5b382970a81-kube-api-access-sbfgt\") pod \"2714e6c8-2065-4423-8249-a5b382970a81\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.000893 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-config-data\") pod \"2714e6c8-2065-4423-8249-a5b382970a81\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.000960 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-combined-ca-bundle\") pod \"2714e6c8-2065-4423-8249-a5b382970a81\" (UID: \"2714e6c8-2065-4423-8249-a5b382970a81\") " Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.013456 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2714e6c8-2065-4423-8249-a5b382970a81-kube-api-access-sbfgt" (OuterVolumeSpecName: "kube-api-access-sbfgt") pod "2714e6c8-2065-4423-8249-a5b382970a81" (UID: "2714e6c8-2065-4423-8249-a5b382970a81"). InnerVolumeSpecName "kube-api-access-sbfgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.030643 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2714e6c8-2065-4423-8249-a5b382970a81" (UID: "2714e6c8-2065-4423-8249-a5b382970a81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.065654 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-config-data" (OuterVolumeSpecName: "config-data") pod "2714e6c8-2065-4423-8249-a5b382970a81" (UID: "2714e6c8-2065-4423-8249-a5b382970a81"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.102917 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbfgt\" (UniqueName: \"kubernetes.io/projected/2714e6c8-2065-4423-8249-a5b382970a81-kube-api-access-sbfgt\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.103189 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.103198 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2714e6c8-2065-4423-8249-a5b382970a81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.611097 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4xcgz" event={"ID":"2714e6c8-2065-4423-8249-a5b382970a81","Type":"ContainerDied","Data":"81ac6266de0d51877f1dd6116d433da66c9bdcae970a79dda8405dc655d768b7"} Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.611137 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ac6266de0d51877f1dd6116d433da66c9bdcae970a79dda8405dc655d768b7" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.611191 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4xcgz" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.903943 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-spl2q"] Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904343 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerName="init" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904360 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerName="init" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904372 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerName="init" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904379 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerName="init" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904389 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2714e6c8-2065-4423-8249-a5b382970a81" containerName="keystone-db-sync" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904398 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2714e6c8-2065-4423-8249-a5b382970a81" containerName="keystone-db-sync" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904409 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerName="dnsmasq-dns" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904416 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerName="dnsmasq-dns" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904430 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f789991-fa7a-45f6-a59e-9148eef61221" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904437 
4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f789991-fa7a-45f6-a59e-9148eef61221" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904447 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df7e23c-1217-4b12-b821-ddd7de9bbf97" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904455 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df7e23c-1217-4b12-b821-ddd7de9bbf97" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904468 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a32c4a5-531d-4ce6-a01e-eb3889d85751" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904474 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a32c4a5-531d-4ce6-a01e-eb3889d85751" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904483 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerName="dnsmasq-dns" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904489 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerName="dnsmasq-dns" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904510 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337e4f54-165a-4b51-90f9-32c8599bc1e3" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904517 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="337e4f54-165a-4b51-90f9-32c8599bc1e3" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904530 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc972f61-262c-4435-b54a-5705e0c00c8c" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904537 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc972f61-262c-4435-b54a-5705e0c00c8c" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: E0123 08:16:39.904545 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06516bc7-9cda-4228-98b9-4e52fcd3bfa1" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.904552 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="06516bc7-9cda-4228-98b9-4e52fcd3bfa1" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906805 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="337e4f54-165a-4b51-90f9-32c8599bc1e3" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906841 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4de881e-111c-4670-bc32-0c647e9ca69e" containerName="dnsmasq-dns" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906854 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a32c4a5-531d-4ce6-a01e-eb3889d85751" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906869 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df7e23c-1217-4b12-b821-ddd7de9bbf97" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906878 4982 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dc972f61-262c-4435-b54a-5705e0c00c8c" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906885 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="06516bc7-9cda-4228-98b9-4e52fcd3bfa1" containerName="mariadb-database-create" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906898 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2714e6c8-2065-4423-8249-a5b382970a81" containerName="keystone-db-sync" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906907 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61e33e5-edf4-4387-92ff-f96f7b0b898c" containerName="dnsmasq-dns" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.906916 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f789991-fa7a-45f6-a59e-9148eef61221" containerName="mariadb-account-create-update" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.908464 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.916677 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.916828 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zhwks" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.920212 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.920701 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.921201 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.925610 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-spl2q"] Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.973340 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-fwmmn"] Jan 23 08:16:39 crc kubenswrapper[4982]: I0123 08:16:39.975429 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.008098 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-fwmmn"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030531 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-scripts\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030591 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vkgt\" (UniqueName: \"kubernetes.io/projected/64085667-21f6-4a45-bb12-fe00c4e80220-kube-api-access-4vkgt\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030680 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjssw\" (UniqueName: \"kubernetes.io/projected/76957c55-7bf3-4dca-ac3b-7a5413292627-kube-api-access-jjssw\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030704 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-fernet-keys\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030727 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-credential-keys\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030757 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-config\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030792 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030850 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030937 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-combined-ca-bundle\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.030962 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.031043 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.031072 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-config-data\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141337 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjssw\" (UniqueName: \"kubernetes.io/projected/76957c55-7bf3-4dca-ac3b-7a5413292627-kube-api-access-jjssw\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-fernet-keys\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141733 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-credential-keys\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-config\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141782 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141840 4982 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141868 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-combined-ca-bundle\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141893 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.141973 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-config-data\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.142017 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-scripts\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.142045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vkgt\" (UniqueName: \"kubernetes.io/projected/64085667-21f6-4a45-bb12-fe00c4e80220-kube-api-access-4vkgt\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.143082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.145744 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.146558 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.147204 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-config\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.147896 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.149016 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-combined-ca-bundle\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.150132 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-config-data\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.153424 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-credential-keys\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.162394 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-fernet-keys\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.162801 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-scripts\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.222591 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vkgt\" (UniqueName: \"kubernetes.io/projected/64085667-21f6-4a45-bb12-fe00c4e80220-kube-api-access-4vkgt\") pod \"dnsmasq-dns-5fdbfbc95f-fwmmn\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.284300 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjssw\" (UniqueName: \"kubernetes.io/projected/76957c55-7bf3-4dca-ac3b-7a5413292627-kube-api-access-jjssw\") pod \"keystone-bootstrap-spl2q\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 
08:16:40.322043 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.464769 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-f2tkd"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.468615 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.485100 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fdz72" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.485370 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.500908 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.540106 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-f2tkd"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.551290 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.577176 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-config\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.577501 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbxl\" (UniqueName: \"kubernetes.io/projected/e046e631-fa39-4492-8774-1976548076d7-kube-api-access-fdbxl\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.577588 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-combined-ca-bundle\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.583731 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mj296"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.586124 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.590989 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9s4bz" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.594057 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.639719 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.642932 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.650418 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.670478 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mj296"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.673984 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.685991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686077 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-combined-ca-bundle\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686117 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686193 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8h6w\" (UniqueName: \"kubernetes.io/projected/87959597-4d4c-4063-9bcc-a91c597281f6-kube-api-access-k8h6w\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbxl\" (UniqueName: \"kubernetes.io/projected/e046e631-fa39-4492-8774-1976548076d7-kube-api-access-fdbxl\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686261 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-run-httpd\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686299 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkgq\" (UniqueName: \"kubernetes.io/projected/be893495-d5e8-4fb0-9d5d-16e26954e2aa-kube-api-access-gdkgq\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-combined-ca-bundle\") pod \"neutron-db-sync-f2tkd\" 
(UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686415 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-scripts\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686462 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-config\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-config-data\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686523 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-db-sync-config-data\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.686557 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-log-httpd\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.714023 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-combined-ca-bundle\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.724471 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-config\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.758001 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.780182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbxl\" (UniqueName: \"kubernetes.io/projected/e046e631-fa39-4492-8774-1976548076d7-kube-api-access-fdbxl\") pod \"neutron-db-sync-f2tkd\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.789185 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4wwwj"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.790481 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8h6w\" (UniqueName: \"kubernetes.io/projected/87959597-4d4c-4063-9bcc-a91c597281f6-kube-api-access-k8h6w\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793121 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-run-httpd\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdkgq\" (UniqueName: \"kubernetes.io/projected/be893495-d5e8-4fb0-9d5d-16e26954e2aa-kube-api-access-gdkgq\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793203 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-scripts\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793234 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-config-data\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793251 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-db-sync-config-data\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-log-httpd\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793287 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.793308 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-combined-ca-bundle\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.802318 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-combined-ca-bundle\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.804474 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-scripts\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.804794 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-run-httpd\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.804965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.805291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-log-httpd\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.809066 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w2jsh" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.809098 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.810799 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.811899 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-config-data\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.823478 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.828182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-db-sync-config-data\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 
08:16:40.836044 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8h6w\" (UniqueName: \"kubernetes.io/projected/87959597-4d4c-4063-9bcc-a91c597281f6-kube-api-access-k8h6w\") pod \"barbican-db-sync-mj296\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.837160 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.839432 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-fwmmn"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.849772 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdkgq\" (UniqueName: \"kubernetes.io/projected/be893495-d5e8-4fb0-9d5d-16e26954e2aa-kube-api-access-gdkgq\") pod \"ceilometer-0\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " pod="openstack/ceilometer-0" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.874878 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4wwwj"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.894835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-combined-ca-bundle\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.895073 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp9hr\" (UniqueName: \"kubernetes.io/projected/c680d8da-c090-4c7e-8068-baecb59402e8-kube-api-access-rp9hr\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.895165 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-config-data\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.895250 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c680d8da-c090-4c7e-8068-baecb59402e8-etc-machine-id\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.895410 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-db-sync-config-data\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.895528 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-scripts\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj"
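Each "No sandbox for pod can be found. Need to start a new one" entry marks the kubelet deciding, during pod sync, that no usable pod sandbox exists yet, so the container runtime (CRI-O on this node, judging by the crio- cgroup names later in the log) must create one before any application containers start. A hedged sketch of that decision only; the runtime type below is a hypothetical stand-in, not the real CRI client:

package main

import "fmt"

// fakeRuntime is a hypothetical stand-in for the CRI runtime service;
// the real kubelet talks gRPC to CRI-O instead.
type fakeRuntime map[string]string

func (r fakeRuntime) readySandboxFor(pod string) (string, bool) {
	id, ok := r[pod]
	return id, ok
}

func (r fakeRuntime) runPodSandbox(pod string) string {
	id := "sandbox-for-" + pod
	r[pod] = id // map receiver: the insert is visible to the caller
	return id
}

// ensureSandbox mimics the decision recorded by the util.go lines:
// reuse a ready sandbox if one exists, otherwise start a new one.
func ensureSandbox(rt fakeRuntime, pod string) string {
	if id, ok := rt.readySandboxFor(pod); ok {
		return id
	}
	fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", pod)
	return rt.runPodSandbox(pod)
}

func main() {
	rt := fakeRuntime{}
	fmt.Println(ensureSandbox(rt, "openstack/cinder-db-sync-4wwwj"))
	fmt.Println(ensureSandbox(rt, "openstack/cinder-db-sync-4wwwj")) // second sync reuses it
}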
Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.896458 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-7lnhw"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.897854 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.904818 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r7snf" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.904987 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.905142 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.912249 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mj296" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.934955 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7lnhw"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.965966 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-snh6x"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.968141 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.974733 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-snh6x"] Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.997337 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-logs\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.997591 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcm4w\" (UniqueName: \"kubernetes.io/projected/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-kube-api-access-xcm4w\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.997703 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.997827 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-db-sync-config-data\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.997905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-config-data\") pod \"placement-db-sync-7lnhw\" (UID:
\"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.997999 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxpn\" (UniqueName: \"kubernetes.io/projected/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-kube-api-access-vrxpn\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-combined-ca-bundle\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998171 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-scripts\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-combined-ca-bundle\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998713 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp9hr\" (UniqueName: \"kubernetes.io/projected/c680d8da-c090-4c7e-8068-baecb59402e8-kube-api-access-rp9hr\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998791 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.998965 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-scripts\") pod \"placement-db-sync-7lnhw\" 
(UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.999033 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-config-data\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.999098 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c680d8da-c090-4c7e-8068-baecb59402e8-etc-machine-id\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:40 crc kubenswrapper[4982]: I0123 08:16:40.999179 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-config\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.000273 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c680d8da-c090-4c7e-8068-baecb59402e8-etc-machine-id\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.004110 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-config-data\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.004500 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-db-sync-config-data\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.013874 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-combined-ca-bundle\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.014697 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-scripts\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.028180 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.035473 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp9hr\" (UniqueName: \"kubernetes.io/projected/c680d8da-c090-4c7e-8068-baecb59402e8-kube-api-access-rp9hr\") pod \"cinder-db-sync-4wwwj\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.080122 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.083175 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.087183 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.087445 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.087785 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cdkmd" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.095936 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.101351 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-scripts\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103138 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-config\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103274 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-logs\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103390 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcm4w\" (UniqueName: \"kubernetes.io/projected/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-kube-api-access-xcm4w\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103449 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103524 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-config-data\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103573 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-combined-ca-bundle\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103604 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxpn\" (UniqueName: \"kubernetes.io/projected/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-kube-api-access-vrxpn\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.103895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.104408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.105194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.108311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-config\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.108374 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.108995 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" 
(UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.109755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.114527 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.120010 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-logs\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.123535 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-config-data\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.125193 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxpn\" (UniqueName: \"kubernetes.io/projected/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-kube-api-access-vrxpn\") pod \"dnsmasq-dns-6f6f8cb849-snh6x\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.134949 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-scripts\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.137502 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcm4w\" (UniqueName: \"kubernetes.io/projected/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-kube-api-access-xcm4w\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.146704 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.171753 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-combined-ca-bundle\") pod \"placement-db-sync-7lnhw\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207784 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207807 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bstf\" (UniqueName: \"kubernetes.io/projected/9a9cbd51-1e1b-45a7-883d-4776946b77fb-kube-api-access-6bstf\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207838 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207878 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207918 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-logs\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.207999 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.213530 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.225495 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.225679 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.233573 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.234327 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310472 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-logs\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310593 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310653 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310679 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310744 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bstf\" (UniqueName: \"kubernetes.io/projected/9a9cbd51-1e1b-45a7-883d-4776946b77fb-kube-api-access-6bstf\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310772 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.310806 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.311449 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-logs\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.312267 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.312327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.329852 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.330172 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-scripts\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.331956 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.332504 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bstf\" (UniqueName: \"kubernetes.io/projected/9a9cbd51-1e1b-45a7-883d-4776946b77fb-kube-api-access-6bstf\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.334422 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-config-data\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " 
pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.358179 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7lnhw" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.363935 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.373005 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.416340 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417578 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417636 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417659 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417696 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417740 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417761 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lnjl\" (UniqueName: \"kubernetes.io/projected/6ba84dc3-3721-4ad1-b84f-398a54711193-kube-api-access-7lnjl\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417782 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.417810 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521167 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521258 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521375 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521465 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521500 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lnjl\" (UniqueName: \"kubernetes.io/projected/6ba84dc3-3721-4ad1-b84f-398a54711193-kube-api-access-7lnjl\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521533 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.521594 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.522879 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.526010 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.528229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-logs\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.534086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.537333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.537804 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.539418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.544557 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lnjl\" (UniqueName: \"kubernetes.io/projected/6ba84dc3-3721-4ad1-b84f-398a54711193-kube-api-access-7lnjl\") pod \"glance-default-internal-api-0\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.600237 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"6ba84dc3-3721-4ad1-b84f-398a54711193\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.670187 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-fwmmn"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.697860 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" event={"ID":"64085667-21f6-4a45-bb12-fe00c4e80220","Type":"ContainerStarted","Data":"0ae1179265bbf5fadda92ec36c4fadc7cdd03e09093b7ea8d3a7723a4569b14e"} Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.784322 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-spl2q"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.842094 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.955593 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-f2tkd"] Jan 23 08:16:41 crc kubenswrapper[4982]: I0123 08:16:41.970199 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mj296"] Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.043149 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.075365 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4wwwj"] Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.255368 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-7lnhw"] Jan 23 08:16:42 crc kubenswrapper[4982]: W0123 08:16:42.273288 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ace387_f81f_4fe6_a4ca_3b34b6ea158d.slice/crio-fb51513241cd77a23a0e735746d4567de78d3482609f5401b8e0d40349020228 WatchSource:0}: Error finding container fb51513241cd77a23a0e735746d4567de78d3482609f5401b8e0d40349020228: Status 404 returned error can't find the container with id fb51513241cd77a23a0e735746d4567de78d3482609f5401b8e0d40349020228 Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.379730 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-snh6x"] Jan 23 08:16:42 crc kubenswrapper[4982]: W0123 08:16:42.433690 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25c1411_c0cb_4eb1_a7da_d5a6a91125c8.slice/crio-eac936fa19ba6d8dd50417595052eea38d99e06f1a2ab4b37cc9347a4a93fc19 WatchSource:0}: Error finding container eac936fa19ba6d8dd50417595052eea38d99e06f1a2ab4b37cc9347a4a93fc19: Status 404 returned error can't find the container with id eac936fa19ba6d8dd50417595052eea38d99e06f1a2ab4b37cc9347a4a93fc19 Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.566384 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:42 crc kubenswrapper[4982]: W0123 08:16:42.578312 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a9cbd51_1e1b_45a7_883d_4776946b77fb.slice/crio-b1779cc7fdb1617769c71308ca6e1ab377c7acbb2b410aeedaf0247405b79754 WatchSource:0}: Error finding container b1779cc7fdb1617769c71308ca6e1ab377c7acbb2b410aeedaf0247405b79754: Status 404 returned error can't 
find the container with id b1779cc7fdb1617769c71308ca6e1ab377c7acbb2b410aeedaf0247405b79754 Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.683438 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.719356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mj296" event={"ID":"87959597-4d4c-4063-9bcc-a91c597281f6","Type":"ContainerStarted","Data":"a64c1a7342a50d29359dd156a55a3109ac32030402c1bf1ef0cee63b6006215c"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.722182 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wwwj" event={"ID":"c680d8da-c090-4c7e-8068-baecb59402e8","Type":"ContainerStarted","Data":"6aa1b2e33da844c29a331edad2ef6922ba960cf4851b672ceac249544d1bc8e9"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.739921 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a9cbd51-1e1b-45a7-883d-4776946b77fb","Type":"ContainerStarted","Data":"b1779cc7fdb1617769c71308ca6e1ab377c7acbb2b410aeedaf0247405b79754"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.773740 4982 generic.go:334] "Generic (PLEG): container finished" podID="64085667-21f6-4a45-bb12-fe00c4e80220" containerID="b8cad73950e66a34b5248d0045ca9a724971ea5dad0d6350543a477fbf64f028" exitCode=0 Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.773891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" event={"ID":"64085667-21f6-4a45-bb12-fe00c4e80220","Type":"ContainerDied","Data":"b8cad73950e66a34b5248d0045ca9a724971ea5dad0d6350543a477fbf64f028"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.797283 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7lnhw" event={"ID":"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d","Type":"ContainerStarted","Data":"fb51513241cd77a23a0e735746d4567de78d3482609f5401b8e0d40349020228"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.803741 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f2tkd" event={"ID":"e046e631-fa39-4492-8774-1976548076d7","Type":"ContainerStarted","Data":"4808f5576fa92a9d9ec349f9566ab6d666670b6c27cde7f6f22033657da5a1b8"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.803819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f2tkd" event={"ID":"e046e631-fa39-4492-8774-1976548076d7","Type":"ContainerStarted","Data":"7ac53f39ba82b126a8712291df968b0ab448d71cf578df68f0f4c41ac64f9789"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.822835 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerStarted","Data":"e2ccdba3268f9d4aa2eb99275c2a6fd55c9c2d191b3a098c7a25926811979523"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.826088 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-spl2q" event={"ID":"76957c55-7bf3-4dca-ac3b-7a5413292627","Type":"ContainerStarted","Data":"64abd397b9bbafd314344b7fea5fc608e939576a618d014cf3584a03cc7dc57b"}
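The "SyncLoop (PLEG): event for pod" entries come from the pod lifecycle event generator, which periodically relists container state from the runtime and converts transitions into ContainerStarted and ContainerDied events; the generic.go "container finished" line above is the died side, carrying exit code 0 for a container of the dnsmasq pod being replaced. A toy diff of two relist snapshots (the types and shortened container IDs below are illustrative, not the kubelet's):

package main

import "fmt"

// state models a container's runtime state between two PLEG relists.
type state string

const (
	running state = "running"
	exited  state = "exited"
)

// diff converts state transitions between relists into the lifecycle
// events the sync loop consumes.
func diff(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		switch {
		case prev[id] != running && s == running:
			events = append(events, "ContainerStarted "+id)
		case prev[id] == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	// Shortened container IDs from the log, used purely as sample data.
	prev := map[string]state{"b8cad739": running}
	cur := map[string]state{"b8cad739": exited, "c2150f58": running}
	for _, e := range diff(prev, cur) {
		fmt.Println(e)
	}
}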
event={"ID":"76957c55-7bf3-4dca-ac3b-7a5413292627","Type":"ContainerStarted","Data":"b25a6e1f62c6b75c153da6ce5bf0b153e1f25d0d28800078fc451c227b96e5af"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.835526 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-f2tkd" podStartSLOduration=2.83550675 podStartE2EDuration="2.83550675s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:42.82117693 +0000 UTC m=+1277.301540316" watchObservedRunningTime="2026-01-23 08:16:42.83550675 +0000 UTC m=+1277.315870146" Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.845065 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ba84dc3-3721-4ad1-b84f-398a54711193","Type":"ContainerStarted","Data":"041bca5328eab54b02e7571a3ae3577fa0e2c9e27777f23b277ec5909071c915"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.854104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" event={"ID":"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8","Type":"ContainerStarted","Data":"c2150f58239ac372fd377b4d3bbc84d469edffb22e213778f97549f263ab4312"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.854165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" event={"ID":"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8","Type":"ContainerStarted","Data":"eac936fa19ba6d8dd50417595052eea38d99e06f1a2ab4b37cc9347a4a93fc19"} Jan 23 08:16:42 crc kubenswrapper[4982]: I0123 08:16:42.857352 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-spl2q" podStartSLOduration=3.8573304779999997 podStartE2EDuration="3.857330478s" podCreationTimestamp="2026-01-23 08:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:42.854696528 +0000 UTC m=+1277.335059914" watchObservedRunningTime="2026-01-23 08:16:42.857330478 +0000 UTC m=+1277.337693864" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.329293 4982 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.598641 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-nb\") pod \"64085667-21f6-4a45-bb12-fe00c4e80220\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.598758 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-swift-storage-0\") pod \"64085667-21f6-4a45-bb12-fe00c4e80220\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.598828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-svc\") pod \"64085667-21f6-4a45-bb12-fe00c4e80220\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.598876 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vkgt\" (UniqueName: \"kubernetes.io/projected/64085667-21f6-4a45-bb12-fe00c4e80220-kube-api-access-4vkgt\") pod \"64085667-21f6-4a45-bb12-fe00c4e80220\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.598920 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-sb\") pod \"64085667-21f6-4a45-bb12-fe00c4e80220\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.599037 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-config\") pod \"64085667-21f6-4a45-bb12-fe00c4e80220\" (UID: \"64085667-21f6-4a45-bb12-fe00c4e80220\") " Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.644819 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64085667-21f6-4a45-bb12-fe00c4e80220-kube-api-access-4vkgt" (OuterVolumeSpecName: "kube-api-access-4vkgt") pod "64085667-21f6-4a45-bb12-fe00c4e80220" (UID: "64085667-21f6-4a45-bb12-fe00c4e80220"). InnerVolumeSpecName "kube-api-access-4vkgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.694787 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64085667-21f6-4a45-bb12-fe00c4e80220" (UID: "64085667-21f6-4a45-bb12-fe00c4e80220"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.697526 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64085667-21f6-4a45-bb12-fe00c4e80220" (UID: "64085667-21f6-4a45-bb12-fe00c4e80220"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.702161 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vkgt\" (UniqueName: \"kubernetes.io/projected/64085667-21f6-4a45-bb12-fe00c4e80220-kube-api-access-4vkgt\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.702205 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.702217 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.717480 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64085667-21f6-4a45-bb12-fe00c4e80220" (UID: "64085667-21f6-4a45-bb12-fe00c4e80220"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.719206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-config" (OuterVolumeSpecName: "config") pod "64085667-21f6-4a45-bb12-fe00c4e80220" (UID: "64085667-21f6-4a45-bb12-fe00c4e80220"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.730009 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "64085667-21f6-4a45-bb12-fe00c4e80220" (UID: "64085667-21f6-4a45-bb12-fe00c4e80220"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.808050 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.808089 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.808101 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64085667-21f6-4a45-bb12-fe00c4e80220-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.912566 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" event={"ID":"64085667-21f6-4a45-bb12-fe00c4e80220","Type":"ContainerDied","Data":"0ae1179265bbf5fadda92ec36c4fadc7cdd03e09093b7ea8d3a7723a4569b14e"} Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.912750 4982 scope.go:117] "RemoveContainer" containerID="b8cad73950e66a34b5248d0045ca9a724971ea5dad0d6350543a477fbf64f028" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.921363 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-fwmmn" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.924363 4982 generic.go:334] "Generic (PLEG): container finished" podID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerID="c2150f58239ac372fd377b4d3bbc84d469edffb22e213778f97549f263ab4312" exitCode=0 Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.924458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" event={"ID":"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8","Type":"ContainerDied","Data":"c2150f58239ac372fd377b4d3bbc84d469edffb22e213778f97549f263ab4312"} Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.924498 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" event={"ID":"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8","Type":"ContainerStarted","Data":"4f417dd4e55ae658952be9e4ba27cdadd79bbc44d62877a77b42a23640f293e9"} Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.926293 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:43 crc kubenswrapper[4982]: I0123 08:16:43.951334 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" podStartSLOduration=3.9513081850000003 podStartE2EDuration="3.951308185s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:43.946355903 +0000 UTC m=+1278.426719309" watchObservedRunningTime="2026-01-23 08:16:43.951308185 +0000 UTC m=+1278.431671571" Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.004293 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-fwmmn"] Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.054912 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-fwmmn"] Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.589978 4982 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.656368 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.675753 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.956478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ba84dc3-3721-4ad1-b84f-398a54711193","Type":"ContainerStarted","Data":"e9728bbde2026dc3bf2244bf060c645ce84823215059e9342809cb44bd74606b"} Jan 23 08:16:44 crc kubenswrapper[4982]: I0123 08:16:44.970781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a9cbd51-1e1b-45a7-883d-4776946b77fb","Type":"ContainerStarted","Data":"84b90f5629abee693a8ca367a320fa063116f1e6f64f2de228e8063786a41b20"} Jan 23 08:16:45 crc kubenswrapper[4982]: I0123 08:16:45.848685 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64085667-21f6-4a45-bb12-fe00c4e80220" path="/var/lib/kubelet/pods/64085667-21f6-4a45-bb12-fe00c4e80220/volumes" Jan 23 08:16:46 crc kubenswrapper[4982]: I0123 08:16:46.033404 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a9cbd51-1e1b-45a7-883d-4776946b77fb","Type":"ContainerStarted","Data":"ba9385154422401d5d89a6b05836066a22e82889de7934a151a3f5fe6ec7a4f4"} Jan 23 08:16:46 crc kubenswrapper[4982]: I0123 08:16:46.033430 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-log" containerID="cri-o://84b90f5629abee693a8ca367a320fa063116f1e6f64f2de228e8063786a41b20" gracePeriod=30 Jan 23 08:16:46 crc kubenswrapper[4982]: I0123 08:16:46.033547 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-httpd" containerID="cri-o://ba9385154422401d5d89a6b05836066a22e82889de7934a151a3f5fe6ec7a4f4" gracePeriod=30 Jan 23 08:16:46 crc kubenswrapper[4982]: I0123 08:16:46.073742 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.073713658 podStartE2EDuration="6.073713658s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:46.062511641 +0000 UTC m=+1280.542875027" watchObservedRunningTime="2026-01-23 08:16:46.073713658 +0000 UTC m=+1280.554077054" Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.050135 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ba84dc3-3721-4ad1-b84f-398a54711193","Type":"ContainerStarted","Data":"174327e131f4b5e9a73bd611c27d3610d589397be96a2473087b3274a95680d4"} Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.050314 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-log" 
containerID="cri-o://e9728bbde2026dc3bf2244bf060c645ce84823215059e9342809cb44bd74606b" gracePeriod=30 Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.050395 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-httpd" containerID="cri-o://174327e131f4b5e9a73bd611c27d3610d589397be96a2473087b3274a95680d4" gracePeriod=30 Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.055321 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerID="ba9385154422401d5d89a6b05836066a22e82889de7934a151a3f5fe6ec7a4f4" exitCode=0 Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.055349 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerID="84b90f5629abee693a8ca367a320fa063116f1e6f64f2de228e8063786a41b20" exitCode=143 Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.055370 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a9cbd51-1e1b-45a7-883d-4776946b77fb","Type":"ContainerDied","Data":"ba9385154422401d5d89a6b05836066a22e82889de7934a151a3f5fe6ec7a4f4"} Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.055397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a9cbd51-1e1b-45a7-883d-4776946b77fb","Type":"ContainerDied","Data":"84b90f5629abee693a8ca367a320fa063116f1e6f64f2de228e8063786a41b20"} Jan 23 08:16:47 crc kubenswrapper[4982]: I0123 08:16:47.073911 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.073893667 podStartE2EDuration="7.073893667s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:16:47.072657084 +0000 UTC m=+1281.553020480" watchObservedRunningTime="2026-01-23 08:16:47.073893667 +0000 UTC m=+1281.554257053" Jan 23 08:16:48 crc kubenswrapper[4982]: I0123 08:16:48.075056 4982 generic.go:334] "Generic (PLEG): container finished" podID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerID="174327e131f4b5e9a73bd611c27d3610d589397be96a2473087b3274a95680d4" exitCode=0 Jan 23 08:16:48 crc kubenswrapper[4982]: I0123 08:16:48.075458 4982 generic.go:334] "Generic (PLEG): container finished" podID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerID="e9728bbde2026dc3bf2244bf060c645ce84823215059e9342809cb44bd74606b" exitCode=143 Jan 23 08:16:48 crc kubenswrapper[4982]: I0123 08:16:48.075140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ba84dc3-3721-4ad1-b84f-398a54711193","Type":"ContainerDied","Data":"174327e131f4b5e9a73bd611c27d3610d589397be96a2473087b3274a95680d4"} Jan 23 08:16:48 crc kubenswrapper[4982]: I0123 08:16:48.075538 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ba84dc3-3721-4ad1-b84f-398a54711193","Type":"ContainerDied","Data":"e9728bbde2026dc3bf2244bf060c645ce84823215059e9342809cb44bd74606b"} Jan 23 08:16:48 crc kubenswrapper[4982]: I0123 08:16:48.077559 4982 generic.go:334] "Generic (PLEG): container finished" podID="76957c55-7bf3-4dca-ac3b-7a5413292627" 
containerID="64abd397b9bbafd314344b7fea5fc608e939576a618d014cf3584a03cc7dc57b" exitCode=0 Jan 23 08:16:48 crc kubenswrapper[4982]: I0123 08:16:48.077605 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-spl2q" event={"ID":"76957c55-7bf3-4dca-ac3b-7a5413292627","Type":"ContainerDied","Data":"64abd397b9bbafd314344b7fea5fc608e939576a618d014cf3584a03cc7dc57b"} Jan 23 08:16:49 crc kubenswrapper[4982]: I0123 08:16:49.894168 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.021559 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-config-data\") pod \"76957c55-7bf3-4dca-ac3b-7a5413292627\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.021966 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-combined-ca-bundle\") pod \"76957c55-7bf3-4dca-ac3b-7a5413292627\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.022033 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjssw\" (UniqueName: \"kubernetes.io/projected/76957c55-7bf3-4dca-ac3b-7a5413292627-kube-api-access-jjssw\") pod \"76957c55-7bf3-4dca-ac3b-7a5413292627\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.022108 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-scripts\") pod \"76957c55-7bf3-4dca-ac3b-7a5413292627\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.022158 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-fernet-keys\") pod \"76957c55-7bf3-4dca-ac3b-7a5413292627\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.022193 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-credential-keys\") pod \"76957c55-7bf3-4dca-ac3b-7a5413292627\" (UID: \"76957c55-7bf3-4dca-ac3b-7a5413292627\") " Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.031913 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "76957c55-7bf3-4dca-ac3b-7a5413292627" (UID: "76957c55-7bf3-4dca-ac3b-7a5413292627"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.044545 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "76957c55-7bf3-4dca-ac3b-7a5413292627" (UID: "76957c55-7bf3-4dca-ac3b-7a5413292627"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.046449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-scripts" (OuterVolumeSpecName: "scripts") pod "76957c55-7bf3-4dca-ac3b-7a5413292627" (UID: "76957c55-7bf3-4dca-ac3b-7a5413292627"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.049108 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76957c55-7bf3-4dca-ac3b-7a5413292627-kube-api-access-jjssw" (OuterVolumeSpecName: "kube-api-access-jjssw") pod "76957c55-7bf3-4dca-ac3b-7a5413292627" (UID: "76957c55-7bf3-4dca-ac3b-7a5413292627"). InnerVolumeSpecName "kube-api-access-jjssw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.081211 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-config-data" (OuterVolumeSpecName: "config-data") pod "76957c55-7bf3-4dca-ac3b-7a5413292627" (UID: "76957c55-7bf3-4dca-ac3b-7a5413292627"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.087103 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76957c55-7bf3-4dca-ac3b-7a5413292627" (UID: "76957c55-7bf3-4dca-ac3b-7a5413292627"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.124159 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.124195 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjssw\" (UniqueName: \"kubernetes.io/projected/76957c55-7bf3-4dca-ac3b-7a5413292627-kube-api-access-jjssw\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.124204 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.124214 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.124222 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.124230 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76957c55-7bf3-4dca-ac3b-7a5413292627-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.139189 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-spl2q" event={"ID":"76957c55-7bf3-4dca-ac3b-7a5413292627","Type":"ContainerDied","Data":"b25a6e1f62c6b75c153da6ce5bf0b153e1f25d0d28800078fc451c227b96e5af"} Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.139227 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25a6e1f62c6b75c153da6ce5bf0b153e1f25d0d28800078fc451c227b96e5af" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.139299 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-spl2q" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.305313 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-spl2q"] Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.316332 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-spl2q"] Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.411105 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5j9ml"] Jan 23 08:16:50 crc kubenswrapper[4982]: E0123 08:16:50.411640 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64085667-21f6-4a45-bb12-fe00c4e80220" containerName="init" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.411659 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="64085667-21f6-4a45-bb12-fe00c4e80220" containerName="init" Jan 23 08:16:50 crc kubenswrapper[4982]: E0123 08:16:50.411672 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76957c55-7bf3-4dca-ac3b-7a5413292627" containerName="keystone-bootstrap" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.411680 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="76957c55-7bf3-4dca-ac3b-7a5413292627" containerName="keystone-bootstrap" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.411880 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="64085667-21f6-4a45-bb12-fe00c4e80220" containerName="init" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.411908 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="76957c55-7bf3-4dca-ac3b-7a5413292627" containerName="keystone-bootstrap" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.412745 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.416597 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.416800 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.418498 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.420036 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.420969 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5j9ml"] Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.423179 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zhwks" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.531737 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-combined-ca-bundle\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.531866 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qtk\" (UniqueName: \"kubernetes.io/projected/5455d1ce-af24-4eb1-9195-88e721380c49-kube-api-access-j2qtk\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.531980 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-credential-keys\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.532071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-config-data\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.532241 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-fernet-keys\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.532321 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-scripts\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.634272 4982 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-fernet-keys\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.634348 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-scripts\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.634384 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-combined-ca-bundle\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.634407 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2qtk\" (UniqueName: \"kubernetes.io/projected/5455d1ce-af24-4eb1-9195-88e721380c49-kube-api-access-j2qtk\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.634435 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-credential-keys\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.634477 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-config-data\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.640321 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-scripts\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.641042 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-combined-ca-bundle\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.642188 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-fernet-keys\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.642887 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-config-data\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") 
" pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.643357 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-credential-keys\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.654438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2qtk\" (UniqueName: \"kubernetes.io/projected/5455d1ce-af24-4eb1-9195-88e721380c49-kube-api-access-j2qtk\") pod \"keystone-bootstrap-5j9ml\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:50 crc kubenswrapper[4982]: I0123 08:16:50.732997 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:16:51 crc kubenswrapper[4982]: I0123 08:16:51.376360 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:16:51 crc kubenswrapper[4982]: I0123 08:16:51.456439 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cgszc"] Jan 23 08:16:51 crc kubenswrapper[4982]: I0123 08:16:51.456702 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" containerID="cri-o://cbb90b7e071130cf99043ac6d0b66388f04986b012fcd87f66c3166cd81d5528" gracePeriod=10 Jan 23 08:16:51 crc kubenswrapper[4982]: I0123 08:16:51.839850 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76957c55-7bf3-4dca-ac3b-7a5413292627" path="/var/lib/kubelet/pods/76957c55-7bf3-4dca-ac3b-7a5413292627/volumes" Jan 23 08:16:53 crc kubenswrapper[4982]: I0123 08:16:53.177191 4982 generic.go:334] "Generic (PLEG): container finished" podID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerID="cbb90b7e071130cf99043ac6d0b66388f04986b012fcd87f66c3166cd81d5528" exitCode=0 Jan 23 08:16:53 crc kubenswrapper[4982]: I0123 08:16:53.177246 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" event={"ID":"882ab76e-9dbd-497e-83a1-62f571bb0426","Type":"ContainerDied","Data":"cbb90b7e071130cf99043ac6d0b66388f04986b012fcd87f66c3166cd81d5528"} Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.015825 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.629348 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.744949 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-config-data\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745029 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-scripts\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745110 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-combined-ca-bundle\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745136 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-public-tls-certs\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745184 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bstf\" (UniqueName: \"kubernetes.io/projected/9a9cbd51-1e1b-45a7-883d-4776946b77fb-kube-api-access-6bstf\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745211 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-logs\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745303 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-httpd-run\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.745354 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\" (UID: \"9a9cbd51-1e1b-45a7-883d-4776946b77fb\") " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.746275 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.746514 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-logs" (OuterVolumeSpecName: "logs") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.751350 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-scripts" (OuterVolumeSpecName: "scripts") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.751973 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a9cbd51-1e1b-45a7-883d-4776946b77fb-kube-api-access-6bstf" (OuterVolumeSpecName: "kube-api-access-6bstf") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "kube-api-access-6bstf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.759916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.777713 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.800451 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.822318 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-config-data" (OuterVolumeSpecName: "config-data") pod "9a9cbd51-1e1b-45a7-883d-4776946b77fb" (UID: "9a9cbd51-1e1b-45a7-883d-4776946b77fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850243 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850296 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850307 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850317 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850325 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850335 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a9cbd51-1e1b-45a7-883d-4776946b77fb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850345 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bstf\" (UniqueName: \"kubernetes.io/projected/9a9cbd51-1e1b-45a7-883d-4776946b77fb-kube-api-access-6bstf\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.850354 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a9cbd51-1e1b-45a7-883d-4776946b77fb-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.871447 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 23 08:16:55 crc kubenswrapper[4982]: I0123 08:16:55.952210 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.214984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9a9cbd51-1e1b-45a7-883d-4776946b77fb","Type":"ContainerDied","Data":"b1779cc7fdb1617769c71308ca6e1ab377c7acbb2b410aeedaf0247405b79754"} Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.215068 4982 scope.go:117] "RemoveContainer" containerID="ba9385154422401d5d89a6b05836066a22e82889de7934a151a3f5fe6ec7a4f4" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.215084 4982 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.251186 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.271648 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.278969 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:56 crc kubenswrapper[4982]: E0123 08:16:56.279468 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-httpd" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.279485 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-httpd" Jan 23 08:16:56 crc kubenswrapper[4982]: E0123 08:16:56.279496 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-log" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.279502 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-log" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.279755 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-httpd" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.279776 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" containerName="glance-log" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.280840 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.283560 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.283755 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.288025 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371578 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4p64\" (UniqueName: \"kubernetes.io/projected/61395d90-ce5a-44a7-9e48-f6d151b17774-kube-api-access-d4p64\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371705 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-logs\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371844 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371913 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-scripts\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.371972 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-config-data\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.372036 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-logs\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474358 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474421 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474488 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-scripts\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474522 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474546 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-config-data\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474604 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.474671 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4p64\" (UniqueName: \"kubernetes.io/projected/61395d90-ce5a-44a7-9e48-f6d151b17774-kube-api-access-d4p64\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.478705 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.478997 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.479538 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-logs\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.484037 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.484069 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-scripts\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.488785 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-config-data\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.492839 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.492875 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4p64\" (UniqueName: \"kubernetes.io/projected/61395d90-ce5a-44a7-9e48-f6d151b17774-kube-api-access-d4p64\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.516405 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " pod="openstack/glance-default-external-api-0" Jan 23 08:16:56 crc kubenswrapper[4982]: I0123 08:16:56.603748 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:16:57 crc kubenswrapper[4982]: I0123 08:16:57.841054 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a9cbd51-1e1b-45a7-883d-4776946b77fb" path="/var/lib/kubelet/pods/9a9cbd51-1e1b-45a7-883d-4776946b77fb/volumes" Jan 23 08:16:58 crc kubenswrapper[4982]: E0123 08:16:58.025440 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b" Jan 23 08:16:58 crc kubenswrapper[4982]: E0123 08:16:58.025780 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcm4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-7lnhw_openstack(f9ace387-f81f-4fe6-a4ca-3b34b6ea158d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:16:58 crc kubenswrapper[4982]: E0123 08:16:58.026989 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-7lnhw" podUID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.150669 4982 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.207759 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.207861 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lnjl\" (UniqueName: \"kubernetes.io/projected/6ba84dc3-3721-4ad1-b84f-398a54711193-kube-api-access-7lnjl\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.207900 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-httpd-run\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.207990 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-combined-ca-bundle\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.208032 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-config-data\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.208069 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-internal-tls-certs\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.208106 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-logs\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.208178 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-scripts\") pod \"6ba84dc3-3721-4ad1-b84f-398a54711193\" (UID: \"6ba84dc3-3721-4ad1-b84f-398a54711193\") " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.210818 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.211224 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-logs" (OuterVolumeSpecName: "logs") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.217307 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.219785 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba84dc3-3721-4ad1-b84f-398a54711193-kube-api-access-7lnjl" (OuterVolumeSpecName: "kube-api-access-7lnjl") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "kube-api-access-7lnjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.226709 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-scripts" (OuterVolumeSpecName: "scripts") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.246517 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.247326 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6ba84dc3-3721-4ad1-b84f-398a54711193","Type":"ContainerDied","Data":"041bca5328eab54b02e7571a3ae3577fa0e2c9e27777f23b277ec5909071c915"} Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.247383 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: E0123 08:16:58.265796 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b\\\"\"" pod="openstack/placement-db-sync-7lnhw" podUID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.311089 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.311121 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lnjl\" (UniqueName: \"kubernetes.io/projected/6ba84dc3-3721-4ad1-b84f-398a54711193-kube-api-access-7lnjl\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.311132 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.311140 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.311148 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba84dc3-3721-4ad1-b84f-398a54711193-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.311155 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.330037 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.406133 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.416921 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.416957 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.440826 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-config-data" (OuterVolumeSpecName: "config-data") pod "6ba84dc3-3721-4ad1-b84f-398a54711193" (UID: "6ba84dc3-3721-4ad1-b84f-398a54711193"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.519185 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba84dc3-3721-4ad1-b84f-398a54711193-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.582239 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.592659 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.621775 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:58 crc kubenswrapper[4982]: E0123 08:16:58.622431 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-log" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.622457 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-log" Jan 23 08:16:58 crc kubenswrapper[4982]: E0123 08:16:58.622489 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-httpd" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.622497 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-httpd" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.622784 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-log" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.622807 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" containerName="glance-httpd" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.624348 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.627146 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.627309 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.644331 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.723738 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724294 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724376 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjgz\" (UniqueName: \"kubernetes.io/projected/305a601b-3073-40b7-86d8-17b02bffba9e-kube-api-access-kmjgz\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724451 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724473 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724513 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.724772 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830054 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830102 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830179 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830259 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjgz\" (UniqueName: \"kubernetes.io/projected/305a601b-3073-40b7-86d8-17b02bffba9e-kube-api-access-kmjgz\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830460 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830495 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830545 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.830830 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.831027 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-logs\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.831965 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.832411 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.835739 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.836310 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.836365 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.837752 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.850586 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjgz\" (UniqueName: \"kubernetes.io/projected/305a601b-3073-40b7-86d8-17b02bffba9e-kube-api-access-kmjgz\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.867466 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") " pod="openstack/glance-default-internal-api-0" Jan 23 08:16:58 crc kubenswrapper[4982]: I0123 08:16:58.955190 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:16:59 crc kubenswrapper[4982]: I0123 08:16:59.841180 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba84dc3-3721-4ad1-b84f-398a54711193" path="/var/lib/kubelet/pods/6ba84dc3-3721-4ad1-b84f-398a54711193/volumes" Jan 23 08:17:05 crc kubenswrapper[4982]: I0123 08:17:05.014372 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: i/o timeout" Jan 23 08:17:09 crc kubenswrapper[4982]: E0123 08:17:09.893200 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 23 08:17:09 crc kubenswrapper[4982]: E0123 08:17:09.893738 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8h6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-mj296_openstack(87959597-4d4c-4063-9bcc-a91c597281f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:17:09 crc kubenswrapper[4982]: E0123 08:17:09.895198 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-mj296" podUID="87959597-4d4c-4063-9bcc-a91c597281f6" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.013008 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.015837 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: i/o timeout" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.015929 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.166904 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8qrf\" (UniqueName: \"kubernetes.io/projected/882ab76e-9dbd-497e-83a1-62f571bb0426-kube-api-access-q8qrf\") pod \"882ab76e-9dbd-497e-83a1-62f571bb0426\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.167062 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-sb\") pod \"882ab76e-9dbd-497e-83a1-62f571bb0426\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.167088 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-svc\") pod \"882ab76e-9dbd-497e-83a1-62f571bb0426\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.167217 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-swift-storage-0\") pod \"882ab76e-9dbd-497e-83a1-62f571bb0426\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.167284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-nb\") pod \"882ab76e-9dbd-497e-83a1-62f571bb0426\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.167352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-config\") pod \"882ab76e-9dbd-497e-83a1-62f571bb0426\" (UID: \"882ab76e-9dbd-497e-83a1-62f571bb0426\") " Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.174881 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882ab76e-9dbd-497e-83a1-62f571bb0426-kube-api-access-q8qrf" (OuterVolumeSpecName: "kube-api-access-q8qrf") pod "882ab76e-9dbd-497e-83a1-62f571bb0426" (UID: "882ab76e-9dbd-497e-83a1-62f571bb0426"). InnerVolumeSpecName "kube-api-access-q8qrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.232743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "882ab76e-9dbd-497e-83a1-62f571bb0426" (UID: "882ab76e-9dbd-497e-83a1-62f571bb0426"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.239677 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "882ab76e-9dbd-497e-83a1-62f571bb0426" (UID: "882ab76e-9dbd-497e-83a1-62f571bb0426"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.243385 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-config" (OuterVolumeSpecName: "config") pod "882ab76e-9dbd-497e-83a1-62f571bb0426" (UID: "882ab76e-9dbd-497e-83a1-62f571bb0426"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.246517 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "882ab76e-9dbd-497e-83a1-62f571bb0426" (UID: "882ab76e-9dbd-497e-83a1-62f571bb0426"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.254703 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "882ab76e-9dbd-497e-83a1-62f571bb0426" (UID: "882ab76e-9dbd-497e-83a1-62f571bb0426"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.271699 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.271761 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.271773 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.271785 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.271794 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882ab76e-9dbd-497e-83a1-62f571bb0426-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.271802 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8qrf\" (UniqueName: \"kubernetes.io/projected/882ab76e-9dbd-497e-83a1-62f571bb0426-kube-api-access-q8qrf\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.391469 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" event={"ID":"882ab76e-9dbd-497e-83a1-62f571bb0426","Type":"ContainerDied","Data":"d13b1d90b90cf27d7e1dfa1d0224bcb103415fe0d9239f7466c7f6a77814b515"} Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.391553 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dfc89d77-cgszc" Jan 23 08:17:10 crc kubenswrapper[4982]: E0123 08:17:10.394669 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-mj296" podUID="87959597-4d4c-4063-9bcc-a91c597281f6" Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.444073 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cgszc"] Jan 23 08:17:10 crc kubenswrapper[4982]: I0123 08:17:10.454709 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dfc89d77-cgszc"] Jan 23 08:17:11 crc kubenswrapper[4982]: I0123 08:17:11.839335 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" path="/var/lib/kubelet/pods/882ab76e-9dbd-497e-83a1-62f571bb0426/volumes" Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.067521 4982 scope.go:117] "RemoveContainer" containerID="84b90f5629abee693a8ca367a320fa063116f1e6f64f2de228e8063786a41b20" Jan 23 08:17:12 crc kubenswrapper[4982]: E0123 08:17:12.073865 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 23 08:17:12 crc kubenswrapper[4982]: E0123 08:17:12.074084 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rp9hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4wwwj_openstack(c680d8da-c090-4c7e-8068-baecb59402e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 08:17:12 crc kubenswrapper[4982]: E0123 08:17:12.075276 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4wwwj" podUID="c680d8da-c090-4c7e-8068-baecb59402e8" Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.372470 4982 scope.go:117] "RemoveContainer" containerID="174327e131f4b5e9a73bd611c27d3610d589397be96a2473087b3274a95680d4" Jan 23 08:17:12 crc kubenswrapper[4982]: E0123 08:17:12.487983 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-4wwwj" podUID="c680d8da-c090-4c7e-8068-baecb59402e8" Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.489284 4982 scope.go:117] "RemoveContainer" containerID="e9728bbde2026dc3bf2244bf060c645ce84823215059e9342809cb44bd74606b" Jan 23 
08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.522926 4982 scope.go:117] "RemoveContainer" containerID="cbb90b7e071130cf99043ac6d0b66388f04986b012fcd87f66c3166cd81d5528" Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.551425 4982 scope.go:117] "RemoveContainer" containerID="8e607fb51626f06450e23fff97474e342c07efaa679050b28c97bc8461e03f11" Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.662096 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5j9ml"] Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.695327 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:17:12 crc kubenswrapper[4982]: I0123 08:17:12.806180 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.436949 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7lnhw" event={"ID":"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d","Type":"ContainerStarted","Data":"37ff5c16cf8a0674dc0470f1bef769079110379662beaef3634cb7d4aa2c0729"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.449116 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5j9ml" event={"ID":"5455d1ce-af24-4eb1-9195-88e721380c49","Type":"ContainerStarted","Data":"61fdd6cb59e3aa39174835f28bd3d7788cf355f682f55a50923e04ba5030d9b0"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.449164 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5j9ml" event={"ID":"5455d1ce-af24-4eb1-9195-88e721380c49","Type":"ContainerStarted","Data":"e9f997675ad5168f421b89d5462073b1f19e7e0cb3126969d1cb4a8a89653489"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.453375 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"305a601b-3073-40b7-86d8-17b02bffba9e","Type":"ContainerStarted","Data":"5f49e46ef3f184cffc0a3f5151239a6c85e934892cf0f9301b74cc75f12bd907"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.453590 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"305a601b-3073-40b7-86d8-17b02bffba9e","Type":"ContainerStarted","Data":"e0078e76c9632908bc91ce1e541d7cb54698df32e9cda56c6cf131a9aafa19a5"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.465915 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61395d90-ce5a-44a7-9e48-f6d151b17774","Type":"ContainerStarted","Data":"40014c145259aa5d0d53ce81504b0a95f5a7c5981fc26df4df0e2b27a5cfe1fe"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.465982 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61395d90-ce5a-44a7-9e48-f6d151b17774","Type":"ContainerStarted","Data":"db0c504d4deb287b237b7148c270933195291a10ae23557897a050eec742c350"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.475274 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-7lnhw" podStartSLOduration=3.549085347 podStartE2EDuration="33.475248632s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="2026-01-23 08:16:42.276407436 +0000 UTC m=+1276.756770822" lastFinishedPulling="2026-01-23 08:17:12.202570721 +0000 UTC m=+1306.682934107" observedRunningTime="2026-01-23 08:17:13.460920909 +0000 UTC 
m=+1307.941284285" watchObservedRunningTime="2026-01-23 08:17:13.475248632 +0000 UTC m=+1307.955612038" Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.476153 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerStarted","Data":"d316e57d5485310b1a6f6f055dc8a387c24afba61c49d357ea7b7754dc65158c"} Jan 23 08:17:13 crc kubenswrapper[4982]: I0123 08:17:13.488356 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5j9ml" podStartSLOduration=23.488332321 podStartE2EDuration="23.488332321s" podCreationTimestamp="2026-01-23 08:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:13.480648556 +0000 UTC m=+1307.961011962" watchObservedRunningTime="2026-01-23 08:17:13.488332321 +0000 UTC m=+1307.968695727" Jan 23 08:17:14 crc kubenswrapper[4982]: I0123 08:17:14.489328 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61395d90-ce5a-44a7-9e48-f6d151b17774","Type":"ContainerStarted","Data":"9124ee3b4fec4679b2565ead7eeef31d586524dda44fbf7a4b3655c10fedd22f"} Jan 23 08:17:14 crc kubenswrapper[4982]: I0123 08:17:14.493619 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerStarted","Data":"968f8e2d23ec66f5cd425c5baf51077bed00c2b524b2589454244a35d15140cc"} Jan 23 08:17:14 crc kubenswrapper[4982]: I0123 08:17:14.515553 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.515532868 podStartE2EDuration="18.515532868s" podCreationTimestamp="2026-01-23 08:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:14.515478467 +0000 UTC m=+1308.995841853" watchObservedRunningTime="2026-01-23 08:17:14.515532868 +0000 UTC m=+1308.995896254" Jan 23 08:17:15 crc kubenswrapper[4982]: I0123 08:17:15.509794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"305a601b-3073-40b7-86d8-17b02bffba9e","Type":"ContainerStarted","Data":"fa2e7ff99f086c9757ad235e8999d715b4462e740d7cca574f91d905931f959d"} Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.522518 4982 generic.go:334] "Generic (PLEG): container finished" podID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" containerID="37ff5c16cf8a0674dc0470f1bef769079110379662beaef3634cb7d4aa2c0729" exitCode=0 Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.523661 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7lnhw" event={"ID":"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d","Type":"ContainerDied","Data":"37ff5c16cf8a0674dc0470f1bef769079110379662beaef3634cb7d4aa2c0729"} Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.553195 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=18.553177384 podStartE2EDuration="18.553177384s" podCreationTimestamp="2026-01-23 08:16:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:15.536448818 +0000 UTC m=+1310.016812204" 
watchObservedRunningTime="2026-01-23 08:17:16.553177384 +0000 UTC m=+1311.033540770" Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.604690 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.604800 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.646812 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 08:17:16 crc kubenswrapper[4982]: I0123 08:17:16.652391 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 08:17:17 crc kubenswrapper[4982]: I0123 08:17:17.535990 4982 generic.go:334] "Generic (PLEG): container finished" podID="5455d1ce-af24-4eb1-9195-88e721380c49" containerID="61fdd6cb59e3aa39174835f28bd3d7788cf355f682f55a50923e04ba5030d9b0" exitCode=0 Jan 23 08:17:17 crc kubenswrapper[4982]: I0123 08:17:17.537275 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5j9ml" event={"ID":"5455d1ce-af24-4eb1-9195-88e721380c49","Type":"ContainerDied","Data":"61fdd6cb59e3aa39174835f28bd3d7788cf355f682f55a50923e04ba5030d9b0"} Jan 23 08:17:17 crc kubenswrapper[4982]: I0123 08:17:17.537311 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 08:17:17 crc kubenswrapper[4982]: I0123 08:17:17.537322 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.578980 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-7lnhw" event={"ID":"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d","Type":"ContainerDied","Data":"fb51513241cd77a23a0e735746d4567de78d3482609f5401b8e0d40349020228"} Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.579280 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb51513241cd77a23a0e735746d4567de78d3482609f5401b8e0d40349020228" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.660049 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-7lnhw" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.850555 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-combined-ca-bundle\") pod \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.851419 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-logs\") pod \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.851511 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-config-data\") pod \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.851540 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-scripts\") pod \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.851570 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcm4w\" (UniqueName: \"kubernetes.io/projected/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-kube-api-access-xcm4w\") pod \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\" (UID: \"f9ace387-f81f-4fe6-a4ca-3b34b6ea158d\") " Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.852737 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-logs" (OuterVolumeSpecName: "logs") pod "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" (UID: "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.853638 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.858809 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-scripts" (OuterVolumeSpecName: "scripts") pod "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" (UID: "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.858996 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-kube-api-access-xcm4w" (OuterVolumeSpecName: "kube-api-access-xcm4w") pod "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" (UID: "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d"). InnerVolumeSpecName "kube-api-access-xcm4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.885502 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" (UID: "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.885575 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-config-data" (OuterVolumeSpecName: "config-data") pod "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" (UID: "f9ace387-f81f-4fe6-a4ca-3b34b6ea158d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.957176 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.957441 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.958484 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.958516 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.958526 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.958535 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcm4w\" (UniqueName: \"kubernetes.io/projected/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d-kube-api-access-xcm4w\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.997111 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:17:18 crc kubenswrapper[4982]: I0123 08:17:18.999509 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.020446 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.161877 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-combined-ca-bundle\") pod \"5455d1ce-af24-4eb1-9195-88e721380c49\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.162332 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-fernet-keys\") pod \"5455d1ce-af24-4eb1-9195-88e721380c49\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.162386 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-config-data\") pod \"5455d1ce-af24-4eb1-9195-88e721380c49\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.162472 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2qtk\" (UniqueName: \"kubernetes.io/projected/5455d1ce-af24-4eb1-9195-88e721380c49-kube-api-access-j2qtk\") pod \"5455d1ce-af24-4eb1-9195-88e721380c49\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.162613 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-scripts\") pod \"5455d1ce-af24-4eb1-9195-88e721380c49\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.162755 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-credential-keys\") pod \"5455d1ce-af24-4eb1-9195-88e721380c49\" (UID: \"5455d1ce-af24-4eb1-9195-88e721380c49\") " Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.172371 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5455d1ce-af24-4eb1-9195-88e721380c49" (UID: "5455d1ce-af24-4eb1-9195-88e721380c49"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.172437 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5455d1ce-af24-4eb1-9195-88e721380c49-kube-api-access-j2qtk" (OuterVolumeSpecName: "kube-api-access-j2qtk") pod "5455d1ce-af24-4eb1-9195-88e721380c49" (UID: "5455d1ce-af24-4eb1-9195-88e721380c49"). InnerVolumeSpecName "kube-api-access-j2qtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.172497 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5455d1ce-af24-4eb1-9195-88e721380c49" (UID: "5455d1ce-af24-4eb1-9195-88e721380c49"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.172660 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-scripts" (OuterVolumeSpecName: "scripts") pod "5455d1ce-af24-4eb1-9195-88e721380c49" (UID: "5455d1ce-af24-4eb1-9195-88e721380c49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.199272 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5455d1ce-af24-4eb1-9195-88e721380c49" (UID: "5455d1ce-af24-4eb1-9195-88e721380c49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.199726 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-config-data" (OuterVolumeSpecName: "config-data") pod "5455d1ce-af24-4eb1-9195-88e721380c49" (UID: "5455d1ce-af24-4eb1-9195-88e721380c49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.265539 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.265603 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.265618 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.265644 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.265658 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5455d1ce-af24-4eb1-9195-88e721380c49-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.265670 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2qtk\" (UniqueName: \"kubernetes.io/projected/5455d1ce-af24-4eb1-9195-88e721380c49-kube-api-access-j2qtk\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.595847 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5j9ml" 
event={"ID":"5455d1ce-af24-4eb1-9195-88e721380c49","Type":"ContainerDied","Data":"e9f997675ad5168f421b89d5462073b1f19e7e0cb3126969d1cb4a8a89653489"} Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.595898 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f997675ad5168f421b89d5462073b1f19e7e0cb3126969d1cb4a8a89653489" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.595920 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5j9ml" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.608081 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerStarted","Data":"264c54476c21beb798630c02b0a4d711b7f3166029520a60ca97bb87331b40ca"} Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.608195 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-7lnhw" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.609806 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.609863 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.703555 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7cd9887484-nrtvc"] Jan 23 08:17:19 crc kubenswrapper[4982]: E0123 08:17:19.704107 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5455d1ce-af24-4eb1-9195-88e721380c49" containerName="keystone-bootstrap" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.704122 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5455d1ce-af24-4eb1-9195-88e721380c49" containerName="keystone-bootstrap" Jan 23 08:17:19 crc kubenswrapper[4982]: E0123 08:17:19.704143 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="init" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.704150 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="init" Jan 23 08:17:19 crc kubenswrapper[4982]: E0123 08:17:19.704162 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.704169 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" Jan 23 08:17:19 crc kubenswrapper[4982]: E0123 08:17:19.704187 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" containerName="placement-db-sync" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.704194 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" containerName="placement-db-sync" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.704400 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5455d1ce-af24-4eb1-9195-88e721380c49" containerName="keystone-bootstrap" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.704415 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" containerName="placement-db-sync" Jan 23 08:17:19 crc 
kubenswrapper[4982]: I0123 08:17:19.704433 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="882ab76e-9dbd-497e-83a1-62f571bb0426" containerName="dnsmasq-dns" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.707992 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.713710 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.713870 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.713990 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.718448 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zhwks" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.718638 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.719578 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.732357 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cd9887484-nrtvc"] Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.826849 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ddf676c6b-tgrs7"] Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.830874 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.838157 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.838542 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.838693 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.838937 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r7snf" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.839063 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.853449 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ddf676c6b-tgrs7"] Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4nvs\" (UniqueName: \"kubernetes.io/projected/b969cd53-c58b-4a97-89b8-8581874e1336-kube-api-access-l4nvs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877423 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-fernet-keys\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877452 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-credential-keys\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877478 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-public-tls-certs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877502 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-combined-ca-bundle\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877531 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-scripts\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877579 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-config-data\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.877599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-internal-tls-certs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.978961 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4nvs\" (UniqueName: \"kubernetes.io/projected/b969cd53-c58b-4a97-89b8-8581874e1336-kube-api-access-l4nvs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.979298 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-fernet-keys\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-config-data\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980107 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-credential-keys\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980217 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-public-tls-certs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-combined-ca-bundle\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980368 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-logs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980455 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-scripts\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980600 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-scripts\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980646 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-internal-tls-certs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xs4\" (UniqueName: \"kubernetes.io/projected/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-kube-api-access-k7xs4\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980705 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-config-data\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980723 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-combined-ca-bundle\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.980743 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-internal-tls-certs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.984226 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-fernet-keys\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.984764 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-scripts\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.996461 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-public-tls-certs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.998130 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-config-data\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:19 crc kubenswrapper[4982]: I0123 08:17:19.998452 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-combined-ca-bundle\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.002246 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4nvs\" (UniqueName: \"kubernetes.io/projected/b969cd53-c58b-4a97-89b8-8581874e1336-kube-api-access-l4nvs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.002852 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-credential-keys\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.005224 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-internal-tls-certs\") pod \"keystone-7cd9887484-nrtvc\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.044383 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083168 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-scripts\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-internal-tls-certs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xs4\" (UniqueName: \"kubernetes.io/projected/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-kube-api-access-k7xs4\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-combined-ca-bundle\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083753 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-config-data\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083904 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.083991 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-logs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.084991 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-logs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.093061 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-combined-ca-bundle\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.101070 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-scripts\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.104138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-config-data\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.108343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xs4\" (UniqueName: \"kubernetes.io/projected/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-kube-api-access-k7xs4\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.111062 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-internal-tls-certs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.130184 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs\") pod \"placement-ddf676c6b-tgrs7\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.178137 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.499749 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7cd9887484-nrtvc"] Jan 23 08:17:20 crc kubenswrapper[4982]: W0123 08:17:20.513783 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb969cd53_c58b_4a97_89b8_8581874e1336.slice/crio-27ef306718fb256397a2264a79fd54ec8498758f1f45d955d19b94fe60e74b81 WatchSource:0}: Error finding container 27ef306718fb256397a2264a79fd54ec8498758f1f45d955d19b94fe60e74b81: Status 404 returned error can't find the container with id 27ef306718fb256397a2264a79fd54ec8498758f1f45d955d19b94fe60e74b81 Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.651331 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cd9887484-nrtvc" event={"ID":"b969cd53-c58b-4a97-89b8-8581874e1336","Type":"ContainerStarted","Data":"27ef306718fb256397a2264a79fd54ec8498758f1f45d955d19b94fe60e74b81"} Jan 23 08:17:20 crc kubenswrapper[4982]: I0123 08:17:20.957285 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ddf676c6b-tgrs7"] Jan 23 08:17:21 crc kubenswrapper[4982]: W0123 08:17:21.155178 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5dfad61_3f03_4cb1_b9b6_6f842bf4dd1b.slice/crio-8affc164004ec0111199baa0e4455448645eebed1db16199d47ab54930681aeb WatchSource:0}: Error finding container 8affc164004ec0111199baa0e4455448645eebed1db16199d47ab54930681aeb: Status 404 returned error can't find the container with id 8affc164004ec0111199baa0e4455448645eebed1db16199d47ab54930681aeb Jan 23 08:17:21 crc kubenswrapper[4982]: I0123 08:17:21.659204 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf676c6b-tgrs7" event={"ID":"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b","Type":"ContainerStarted","Data":"8affc164004ec0111199baa0e4455448645eebed1db16199d47ab54930681aeb"} Jan 23 08:17:22 crc kubenswrapper[4982]: I0123 08:17:22.673379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cd9887484-nrtvc" event={"ID":"b969cd53-c58b-4a97-89b8-8581874e1336","Type":"ContainerStarted","Data":"99034c7805c03b0bb5d9d7d40da49977052e28b7160c0383014a800c8a076fbc"} Jan 23 08:17:22 crc kubenswrapper[4982]: I0123 08:17:22.674117 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:22 crc kubenswrapper[4982]: I0123 08:17:22.679355 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf676c6b-tgrs7" event={"ID":"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b","Type":"ContainerStarted","Data":"aeab8722bcf476c6445e671d68cf2b7cfff2199e663f2a397789bae6c713a5ba"} Jan 23 08:17:22 crc kubenswrapper[4982]: I0123 08:17:22.703652 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7cd9887484-nrtvc" podStartSLOduration=3.703619176 podStartE2EDuration="3.703619176s" podCreationTimestamp="2026-01-23 08:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:22.698336915 +0000 UTC m=+1317.178700321" watchObservedRunningTime="2026-01-23 08:17:22.703619176 +0000 UTC m=+1317.183982562" Jan 23 08:17:23 crc kubenswrapper[4982]: I0123 08:17:23.691126 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-ddf676c6b-tgrs7" event={"ID":"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b","Type":"ContainerStarted","Data":"3e9ea0f153b90fef5a0f09d7ff49a929afa3ab0bee839a8bf8bff02973243915"} Jan 23 08:17:23 crc kubenswrapper[4982]: I0123 08:17:23.691913 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:23 crc kubenswrapper[4982]: I0123 08:17:23.717020 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ddf676c6b-tgrs7" podStartSLOduration=4.716978303 podStartE2EDuration="4.716978303s" podCreationTimestamp="2026-01-23 08:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:23.716940202 +0000 UTC m=+1318.197303618" watchObservedRunningTime="2026-01-23 08:17:23.716978303 +0000 UTC m=+1318.197341689" Jan 23 08:17:24 crc kubenswrapper[4982]: I0123 08:17:24.704513 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:24 crc kubenswrapper[4982]: I0123 08:17:24.763457 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 08:17:24 crc kubenswrapper[4982]: I0123 08:17:24.763638 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:24 crc kubenswrapper[4982]: I0123 08:17:24.763720 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 08:17:24 crc kubenswrapper[4982]: I0123 08:17:24.778569 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 08:17:24 crc kubenswrapper[4982]: I0123 08:17:24.795461 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 08:17:30 crc kubenswrapper[4982]: I0123 08:17:30.776393 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mj296" event={"ID":"87959597-4d4c-4063-9bcc-a91c597281f6","Type":"ContainerStarted","Data":"d70784a22b80ca41960a42faa1acf918f4a1cd9702bca90db1a12337c2709198"} Jan 23 08:17:30 crc kubenswrapper[4982]: I0123 08:17:30.778233 4982 generic.go:334] "Generic (PLEG): container finished" podID="e046e631-fa39-4492-8774-1976548076d7" containerID="4808f5576fa92a9d9ec349f9566ab6d666670b6c27cde7f6f22033657da5a1b8" exitCode=0 Jan 23 08:17:30 crc kubenswrapper[4982]: I0123 08:17:30.778278 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f2tkd" event={"ID":"e046e631-fa39-4492-8774-1976548076d7","Type":"ContainerDied","Data":"4808f5576fa92a9d9ec349f9566ab6d666670b6c27cde7f6f22033657da5a1b8"} Jan 23 08:17:30 crc kubenswrapper[4982]: I0123 08:17:30.798320 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mj296" podStartSLOduration=8.146443788 podStartE2EDuration="50.79829979s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="2026-01-23 08:16:41.965542203 +0000 UTC m=+1276.445905589" lastFinishedPulling="2026-01-23 08:17:24.617398205 +0000 UTC m=+1319.097761591" observedRunningTime="2026-01-23 08:17:30.794397256 +0000 UTC m=+1325.274760642" watchObservedRunningTime="2026-01-23 08:17:30.79829979 +0000 UTC m=+1325.278663176" Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 
Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.790655 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerStarted","Data":"f7c8dac825b45347f3f3ab6bcae8f70e8c3dcb81a985f0632189119fac68503e"} Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.791193 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="sg-core" containerID="cri-o://264c54476c21beb798630c02b0a4d711b7f3166029520a60ca97bb87331b40ca" gracePeriod=30 Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.791286 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-notification-agent" containerID="cri-o://968f8e2d23ec66f5cd425c5baf51077bed00c2b524b2589454244a35d15140cc" gracePeriod=30 Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.791178 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="proxy-httpd" containerID="cri-o://f7c8dac825b45347f3f3ab6bcae8f70e8c3dcb81a985f0632189119fac68503e" gracePeriod=30 Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.790786 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-central-agent" containerID="cri-o://d316e57d5485310b1a6f6f055dc8a387c24afba61c49d357ea7b7754dc65158c" gracePeriod=30 Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.791531 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.795789 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wwwj" event={"ID":"c680d8da-c090-4c7e-8068-baecb59402e8","Type":"ContainerStarted","Data":"03db8693dc2242a529ba2dd928929ed07a9fba357317ff119471a64da1e30cda"} Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.826564 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.230795239 podStartE2EDuration="51.826545325s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="2026-01-23 08:16:42.109677135 +0000 UTC m=+1276.590040521" lastFinishedPulling="2026-01-23 08:17:30.705427221 +0000 UTC m=+1325.185790607" observedRunningTime="2026-01-23 08:17:31.819108116 +0000 UTC m=+1326.299471502" watchObservedRunningTime="2026-01-23 08:17:31.826545325 +0000 UTC m=+1326.306908711" Jan 23 08:17:31 crc kubenswrapper[4982]: I0123 08:17:31.844673 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4wwwj" podStartSLOduration=3.271592368 podStartE2EDuration="51.844654868s" podCreationTimestamp="2026-01-23 08:16:40 +0000 UTC" firstStartedPulling="2026-01-23 08:16:42.111027531 +0000 UTC m=+1276.591390917" lastFinishedPulling="2026-01-23 08:17:30.684090031 +0000 UTC m=+1325.164453417" observedRunningTime="2026-01-23 08:17:31.840423325 +0000 UTC m=+1326.320786731" watchObservedRunningTime="2026-01-23 08:17:31.844654868 +0000 UTC m=+1326.325018264" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.150765 4982 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.288054 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbxl\" (UniqueName: \"kubernetes.io/projected/e046e631-fa39-4492-8774-1976548076d7-kube-api-access-fdbxl\") pod \"e046e631-fa39-4492-8774-1976548076d7\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.288488 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-config\") pod \"e046e631-fa39-4492-8774-1976548076d7\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.288551 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-combined-ca-bundle\") pod \"e046e631-fa39-4492-8774-1976548076d7\" (UID: \"e046e631-fa39-4492-8774-1976548076d7\") " Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.295487 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e046e631-fa39-4492-8774-1976548076d7-kube-api-access-fdbxl" (OuterVolumeSpecName: "kube-api-access-fdbxl") pod "e046e631-fa39-4492-8774-1976548076d7" (UID: "e046e631-fa39-4492-8774-1976548076d7"). InnerVolumeSpecName "kube-api-access-fdbxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.316727 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-config" (OuterVolumeSpecName: "config") pod "e046e631-fa39-4492-8774-1976548076d7" (UID: "e046e631-fa39-4492-8774-1976548076d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.318336 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e046e631-fa39-4492-8774-1976548076d7" (UID: "e046e631-fa39-4492-8774-1976548076d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.390550 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.390582 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e046e631-fa39-4492-8774-1976548076d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.390593 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbxl\" (UniqueName: \"kubernetes.io/projected/e046e631-fa39-4492-8774-1976548076d7-kube-api-access-fdbxl\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.807499 4982 generic.go:334] "Generic (PLEG): container finished" podID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerID="f7c8dac825b45347f3f3ab6bcae8f70e8c3dcb81a985f0632189119fac68503e" exitCode=0 Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.807526 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerDied","Data":"f7c8dac825b45347f3f3ab6bcae8f70e8c3dcb81a985f0632189119fac68503e"} Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.807557 4982 generic.go:334] "Generic (PLEG): container finished" podID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerID="264c54476c21beb798630c02b0a4d711b7f3166029520a60ca97bb87331b40ca" exitCode=2 Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.807574 4982 generic.go:334] "Generic (PLEG): container finished" podID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerID="d316e57d5485310b1a6f6f055dc8a387c24afba61c49d357ea7b7754dc65158c" exitCode=0 Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.807587 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerDied","Data":"264c54476c21beb798630c02b0a4d711b7f3166029520a60ca97bb87331b40ca"} Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.807606 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerDied","Data":"d316e57d5485310b1a6f6f055dc8a387c24afba61c49d357ea7b7754dc65158c"} Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.809890 4982 generic.go:334] "Generic (PLEG): container finished" podID="87959597-4d4c-4063-9bcc-a91c597281f6" containerID="d70784a22b80ca41960a42faa1acf918f4a1cd9702bca90db1a12337c2709198" exitCode=0 Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.809957 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mj296" event={"ID":"87959597-4d4c-4063-9bcc-a91c597281f6","Type":"ContainerDied","Data":"d70784a22b80ca41960a42faa1acf918f4a1cd9702bca90db1a12337c2709198"} Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.811546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-f2tkd" event={"ID":"e046e631-fa39-4492-8774-1976548076d7","Type":"ContainerDied","Data":"7ac53f39ba82b126a8712291df968b0ab448d71cf578df68f0f4c41ac64f9789"} Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.811585 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-f2tkd" Jan 23 08:17:32 crc kubenswrapper[4982]: I0123 08:17:32.811604 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ac53f39ba82b126a8712291df968b0ab448d71cf578df68f0f4c41ac64f9789" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.000898 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-l4q77"] Jan 23 08:17:33 crc kubenswrapper[4982]: E0123 08:17:33.001523 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e046e631-fa39-4492-8774-1976548076d7" containerName="neutron-db-sync" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.001543 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e046e631-fa39-4492-8774-1976548076d7" containerName="neutron-db-sync" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.002134 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e046e631-fa39-4492-8774-1976548076d7" containerName="neutron-db-sync" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.003594 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.014071 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-l4q77"] Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.104420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-svc\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.104826 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nk4q\" (UniqueName: \"kubernetes.io/projected/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-kube-api-access-4nk4q\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.105025 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.105141 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.105214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.105278 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-config\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.207235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-config\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.207320 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-svc\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.207378 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nk4q\" (UniqueName: \"kubernetes.io/projected/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-kube-api-access-4nk4q\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.207403 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.207435 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.207476 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.208515 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.208827 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76d579d464-xmhp9"] Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.209066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" 
Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.209105 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-svc\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.208843 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.209181 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-config\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.210587 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.213962 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-fdz72" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.214517 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.215482 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.220084 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76d579d464-xmhp9"] Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.223674 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.248861 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nk4q\" (UniqueName: \"kubernetes.io/projected/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-kube-api-access-4nk4q\") pod \"dnsmasq-dns-685444497c-l4q77\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.309230 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-httpd-config\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.309349 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-combined-ca-bundle\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.309406 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-config\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.309470 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtf5\" (UniqueName: \"kubernetes.io/projected/b5234ba0-f780-4929-b3b1-8db96026770c-kube-api-access-kwtf5\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.309600 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-ovndb-tls-certs\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.322288 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.410886 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-ovndb-tls-certs\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.410932 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-httpd-config\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.410981 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-combined-ca-bundle\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.411027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-config\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.411057 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtf5\" (UniqueName: \"kubernetes.io/projected/b5234ba0-f780-4929-b3b1-8db96026770c-kube-api-access-kwtf5\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.420697 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-combined-ca-bundle\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.420820 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-ovndb-tls-certs\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.421200 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-config\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.428503 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-httpd-config\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.433796 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwtf5\" (UniqueName: \"kubernetes.io/projected/b5234ba0-f780-4929-b3b1-8db96026770c-kube-api-access-kwtf5\") pod \"neutron-76d579d464-xmhp9\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.526201 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:33 crc kubenswrapper[4982]: I0123 08:17:33.895809 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-l4q77"] Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.101153 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76d579d464-xmhp9"] Jan 23 08:17:34 crc kubenswrapper[4982]: W0123 08:17:34.112827 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5234ba0_f780_4929_b3b1_8db96026770c.slice/crio-3c8b2192eb6ed4b280a7e8020c44a4474f21b9aebcadf002fb513144d1b30e24 WatchSource:0}: Error finding container 3c8b2192eb6ed4b280a7e8020c44a4474f21b9aebcadf002fb513144d1b30e24: Status 404 returned error can't find the container with id 3c8b2192eb6ed4b280a7e8020c44a4474f21b9aebcadf002fb513144d1b30e24 Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.378229 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mj296" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.445285 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8h6w\" (UniqueName: \"kubernetes.io/projected/87959597-4d4c-4063-9bcc-a91c597281f6-kube-api-access-k8h6w\") pod \"87959597-4d4c-4063-9bcc-a91c597281f6\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.445477 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-combined-ca-bundle\") pod \"87959597-4d4c-4063-9bcc-a91c597281f6\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.445591 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-db-sync-config-data\") pod \"87959597-4d4c-4063-9bcc-a91c597281f6\" (UID: \"87959597-4d4c-4063-9bcc-a91c597281f6\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.449147 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87959597-4d4c-4063-9bcc-a91c597281f6-kube-api-access-k8h6w" (OuterVolumeSpecName: "kube-api-access-k8h6w") pod "87959597-4d4c-4063-9bcc-a91c597281f6" (UID: "87959597-4d4c-4063-9bcc-a91c597281f6"). InnerVolumeSpecName "kube-api-access-k8h6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.453365 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "87959597-4d4c-4063-9bcc-a91c597281f6" (UID: "87959597-4d4c-4063-9bcc-a91c597281f6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.494747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87959597-4d4c-4063-9bcc-a91c597281f6" (UID: "87959597-4d4c-4063-9bcc-a91c597281f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.547472 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.547792 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/87959597-4d4c-4063-9bcc-a91c597281f6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.547805 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8h6w\" (UniqueName: \"kubernetes.io/projected/87959597-4d4c-4063-9bcc-a91c597281f6-kube-api-access-k8h6w\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.858603 4982 generic.go:334] "Generic (PLEG): container finished" podID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerID="968f8e2d23ec66f5cd425c5baf51077bed00c2b524b2589454244a35d15140cc" exitCode=0 Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.858689 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerDied","Data":"968f8e2d23ec66f5cd425c5baf51077bed00c2b524b2589454244a35d15140cc"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.863495 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76d579d464-xmhp9" event={"ID":"b5234ba0-f780-4929-b3b1-8db96026770c","Type":"ContainerStarted","Data":"b0dc2778be8e3e91a2775231145b32b7176df87ebc194db92e4d240e2fd396e1"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.863542 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76d579d464-xmhp9" event={"ID":"b5234ba0-f780-4929-b3b1-8db96026770c","Type":"ContainerStarted","Data":"a9cb9e8b1eabd25f0f4033246745d7188d6551abcf1f806565c3d8557778a2d4"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.863553 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76d579d464-xmhp9" event={"ID":"b5234ba0-f780-4929-b3b1-8db96026770c","Type":"ContainerStarted","Data":"3c8b2192eb6ed4b280a7e8020c44a4474f21b9aebcadf002fb513144d1b30e24"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.863829 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.865787 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-mj296" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.865790 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mj296" event={"ID":"87959597-4d4c-4063-9bcc-a91c597281f6","Type":"ContainerDied","Data":"a64c1a7342a50d29359dd156a55a3109ac32030402c1bf1ef0cee63b6006215c"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.865834 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a64c1a7342a50d29359dd156a55a3109ac32030402c1bf1ef0cee63b6006215c" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.869110 4982 generic.go:334] "Generic (PLEG): container finished" podID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerID="c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33" exitCode=0 Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.869143 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-l4q77" event={"ID":"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e","Type":"ContainerDied","Data":"c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.869165 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-l4q77" event={"ID":"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e","Type":"ContainerStarted","Data":"2e536db7fcea3044d0f7a3458a49a98bb457273706f94c4419bfb2853be1a562"} Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.892316 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76d579d464-xmhp9" podStartSLOduration=1.892295942 podStartE2EDuration="1.892295942s" podCreationTimestamp="2026-01-23 08:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:34.879779258 +0000 UTC m=+1329.360142674" watchObservedRunningTime="2026-01-23 08:17:34.892295942 +0000 UTC m=+1329.372659328" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.897289 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967590 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-sg-core-conf-yaml\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967666 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-run-httpd\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967769 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-config-data\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967803 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkgq\" (UniqueName: \"kubernetes.io/projected/be893495-d5e8-4fb0-9d5d-16e26954e2aa-kube-api-access-gdkgq\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967852 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-combined-ca-bundle\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967905 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-scripts\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.967961 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-log-httpd\") pod \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\" (UID: \"be893495-d5e8-4fb0-9d5d-16e26954e2aa\") " Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.970889 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.978143 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:34 crc kubenswrapper[4982]: I0123 08:17:34.983683 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-scripts" (OuterVolumeSpecName: "scripts") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.000278 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be893495-d5e8-4fb0-9d5d-16e26954e2aa-kube-api-access-gdkgq" (OuterVolumeSpecName: "kube-api-access-gdkgq") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "kube-api-access-gdkgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.033915 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.092834 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdkgq\" (UniqueName: \"kubernetes.io/projected/be893495-d5e8-4fb0-9d5d-16e26954e2aa-kube-api-access-gdkgq\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.092879 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.092892 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.092904 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.092914 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be893495-d5e8-4fb0-9d5d-16e26954e2aa-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.142747 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6959c78bf5-trn2r"] Jan 23 08:17:35 crc kubenswrapper[4982]: E0123 08:17:35.143325 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-notification-agent" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143347 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-notification-agent" Jan 23 08:17:35 crc kubenswrapper[4982]: E0123 08:17:35.143380 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87959597-4d4c-4063-9bcc-a91c597281f6" containerName="barbican-db-sync" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 
08:17:35.143386 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="87959597-4d4c-4063-9bcc-a91c597281f6" containerName="barbican-db-sync" Jan 23 08:17:35 crc kubenswrapper[4982]: E0123 08:17:35.143397 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="proxy-httpd" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143404 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="proxy-httpd" Jan 23 08:17:35 crc kubenswrapper[4982]: E0123 08:17:35.143414 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="sg-core" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143419 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="sg-core" Jan 23 08:17:35 crc kubenswrapper[4982]: E0123 08:17:35.143432 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-central-agent" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143437 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-central-agent" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143683 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="proxy-httpd" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143696 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-central-agent" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143704 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="sg-core" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143718 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" containerName="ceilometer-notification-agent" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.143725 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="87959597-4d4c-4063-9bcc-a91c597281f6" containerName="barbican-db-sync" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.144892 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.160566 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.161245 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9s4bz" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.167758 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6959c78bf5-trn2r"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.169848 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.233880 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.258032 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.260311 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.264904 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.306540 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.315516 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.315564 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49f67d9-be69-448a-9f50-cf2801ed1528-logs\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.315680 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8kc6\" (UniqueName: \"kubernetes.io/projected/c49f67d9-be69-448a-9f50-cf2801ed1528-kube-api-access-r8kc6\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.315818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data-custom\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " 
pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.315873 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-combined-ca-bundle\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.316029 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.348718 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-l4q77"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.366043 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-config-data" (OuterVolumeSpecName: "config-data") pod "be893495-d5e8-4fb0-9d5d-16e26954e2aa" (UID: "be893495-d5e8-4fb0-9d5d-16e26954e2aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.418678 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-95sdf"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.420951 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.422065 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8vb\" (UniqueName: \"kubernetes.io/projected/be86b806-f778-4826-8cfa-c29d366c9af5-kube-api-access-dt8vb\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.422891 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data-custom\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.423062 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-combined-ca-bundle\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.423194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-combined-ca-bundle\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.423284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/be86b806-f778-4826-8cfa-c29d366c9af5-logs\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.423511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.424409 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.424657 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49f67d9-be69-448a-9f50-cf2801ed1528-logs\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.428819 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49f67d9-be69-448a-9f50-cf2801ed1528-logs\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.431836 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data-custom\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.432910 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-combined-ca-bundle\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.433023 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.433249 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8kc6\" (UniqueName: \"kubernetes.io/projected/c49f67d9-be69-448a-9f50-cf2801ed1528-kube-api-access-r8kc6\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.433382 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data-custom\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.433565 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be893495-d5e8-4fb0-9d5d-16e26954e2aa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.439784 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d49874b88-l4b2l"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.441494 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.447087 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.456705 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-95sdf"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.460327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8kc6\" (UniqueName: \"kubernetes.io/projected/c49f67d9-be69-448a-9f50-cf2801ed1528-kube-api-access-r8kc6\") pod \"barbican-worker-6959c78bf5-trn2r\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.477076 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d49874b88-l4b2l"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534639 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8vb\" (UniqueName: \"kubernetes.io/projected/be86b806-f778-4826-8cfa-c29d366c9af5-kube-api-access-dt8vb\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534690 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534739 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534758 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534792 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-combined-ca-bundle\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534817 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be86b806-f778-4826-8cfa-c29d366c9af5-logs\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534870 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-config\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.534987 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbf5\" (UniqueName: \"kubernetes.io/projected/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-kube-api-access-cqbf5\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.535035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data-custom\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.535058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.535527 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be86b806-f778-4826-8cfa-c29d366c9af5-logs\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.539406 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-combined-ca-bundle\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " 
pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.542369 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data-custom\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.543090 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.557285 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8vb\" (UniqueName: \"kubernetes.io/projected/be86b806-f778-4826-8cfa-c29d366c9af5-kube-api-access-dt8vb\") pod \"barbican-keystone-listener-7bf8bcfb96-bjw9x\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.606990 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.637388 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-logs\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.637450 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgc6t\" (UniqueName: \"kubernetes.io/projected/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-kube-api-access-xgc6t\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.637502 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data-custom\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.637818 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-config\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638388 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-combined-ca-bundle\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638455 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqbf5\" (UniqueName: \"kubernetes.io/projected/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-kube-api-access-cqbf5\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638527 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638665 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638738 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.638769 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.639018 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.639166 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-config\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " 
pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.639391 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.640064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.640171 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.640359 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.657883 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqbf5\" (UniqueName: \"kubernetes.io/projected/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-kube-api-access-cqbf5\") pod \"dnsmasq-dns-66cdd4b5b5-95sdf\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.741968 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-logs\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.742296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgc6t\" (UniqueName: \"kubernetes.io/projected/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-kube-api-access-xgc6t\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.742335 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data-custom\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.742372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-combined-ca-bundle\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc 
kubenswrapper[4982]: I0123 08:17:35.742448 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-logs\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.742467 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.750504 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data-custom\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.751362 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-combined-ca-bundle\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.755910 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.761974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgc6t\" (UniqueName: \"kubernetes.io/projected/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-kube-api-access-xgc6t\") pod \"barbican-api-7d49874b88-l4b2l\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.773950 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.793220 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.914471 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685444497c-l4q77" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerName="dnsmasq-dns" containerID="cri-o://1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea" gracePeriod=10 Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.914801 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-l4q77" event={"ID":"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e","Type":"ContainerStarted","Data":"1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea"} Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.914852 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.928596 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.929733 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be893495-d5e8-4fb0-9d5d-16e26954e2aa","Type":"ContainerDied","Data":"e2ccdba3268f9d4aa2eb99275c2a6fd55c9c2d191b3a098c7a25926811979523"} Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.935846 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6959c78bf5-trn2r"] Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.935884 4982 scope.go:117] "RemoveContainer" containerID="f7c8dac825b45347f3f3ab6bcae8f70e8c3dcb81a985f0632189119fac68503e" Jan 23 08:17:35 crc kubenswrapper[4982]: I0123 08:17:35.973574 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685444497c-l4q77" podStartSLOduration=3.973549723 podStartE2EDuration="3.973549723s" podCreationTimestamp="2026-01-23 08:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:35.959059076 +0000 UTC m=+1330.439422472" watchObservedRunningTime="2026-01-23 08:17:35.973549723 +0000 UTC m=+1330.453913109" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.011167 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f7c478fb9-56vcd"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.014050 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.017438 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.019000 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.051724 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f7c478fb9-56vcd"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.077863 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrmn\" (UniqueName: \"kubernetes.io/projected/5448f946-ae45-4ef2-84fb-d848a335c81e-kube-api-access-mgrmn\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.078023 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-public-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.078078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-internal-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.078136 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-combined-ca-bundle\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.078160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-ovndb-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.078187 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-config\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.078201 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-httpd-config\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182219 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-public-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182280 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-internal-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182345 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-combined-ca-bundle\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182367 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-ovndb-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182403 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-config\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182425 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-httpd-config\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.182472 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgrmn\" (UniqueName: \"kubernetes.io/projected/5448f946-ae45-4ef2-84fb-d848a335c81e-kube-api-access-mgrmn\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.197309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-public-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.200193 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-config\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.203236 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-combined-ca-bundle\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " 
pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.207310 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.213512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-httpd-config\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.214932 4982 scope.go:117] "RemoveContainer" containerID="264c54476c21beb798630c02b0a4d711b7f3166029520a60ca97bb87331b40ca" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.220254 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-ovndb-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.220735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgrmn\" (UniqueName: \"kubernetes.io/projected/5448f946-ae45-4ef2-84fb-d848a335c81e-kube-api-access-mgrmn\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.228548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-internal-tls-certs\") pod \"neutron-6f7c478fb9-56vcd\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.232481 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.251069 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.272829 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.275642 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.285124 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.288371 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.297178 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.336208 4982 scope.go:117] "RemoveContainer" containerID="968f8e2d23ec66f5cd425c5baf51077bed00c2b524b2589454244a35d15140cc" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.374907 4982 scope.go:117] "RemoveContainer" containerID="d316e57d5485310b1a6f6f055dc8a387c24afba61c49d357ea7b7754dc65158c" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394190 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-scripts\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394316 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-config-data\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394432 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394504 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-log-httpd\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394577 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-run-httpd\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.394647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcjg4\" (UniqueName: \"kubernetes.io/projected/05941603-ac04-4d11-bf57-a2ad73b9b5bc-kube-api-access-rcjg4\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 
08:17:36.423921 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d49874b88-l4b2l"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.477947 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.497991 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.498048 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-config-data\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.498097 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.498140 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-log-httpd\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.498179 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-run-httpd\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.498210 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcjg4\" (UniqueName: \"kubernetes.io/projected/05941603-ac04-4d11-bf57-a2ad73b9b5bc-kube-api-access-rcjg4\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.498303 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-scripts\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.499173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-log-httpd\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.499456 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-run-httpd\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.504353 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-config-data\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.504941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.505693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.508712 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-scripts\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.532593 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcjg4\" (UniqueName: \"kubernetes.io/projected/05941603-ac04-4d11-bf57-a2ad73b9b5bc-kube-api-access-rcjg4\") pod \"ceilometer-0\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.572896 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.603122 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-svc\") pod \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.603550 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nk4q\" (UniqueName: \"kubernetes.io/projected/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-kube-api-access-4nk4q\") pod \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.603569 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-sb\") pod \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.603593 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-config\") pod \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.603653 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-nb\") pod \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\" (UID: 
\"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.603856 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-swift-storage-0\") pod \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\" (UID: \"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e\") " Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.624262 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-kube-api-access-4nk4q" (OuterVolumeSpecName: "kube-api-access-4nk4q") pod "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" (UID: "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e"). InnerVolumeSpecName "kube-api-access-4nk4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.629852 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-95sdf"] Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.636236 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.705860 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nk4q\" (UniqueName: \"kubernetes.io/projected/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-kube-api-access-4nk4q\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.714091 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" (UID: "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.726230 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" (UID: "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.756774 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" (UID: "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.784516 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-config" (OuterVolumeSpecName: "config") pod "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" (UID: "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.786506 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" (UID: "1dfc8e7e-942f-478b-bec8-f4e6fdccde9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.808594 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.808671 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.808688 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.808699 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.808710 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.944284 4982 generic.go:334] "Generic (PLEG): container finished" podID="c680d8da-c090-4c7e-8068-baecb59402e8" containerID="03db8693dc2242a529ba2dd928929ed07a9fba357317ff119471a64da1e30cda" exitCode=0 Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.944340 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wwwj" event={"ID":"c680d8da-c090-4c7e-8068-baecb59402e8","Type":"ContainerDied","Data":"03db8693dc2242a529ba2dd928929ed07a9fba357317ff119471a64da1e30cda"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.949030 4982 generic.go:334] "Generic (PLEG): container finished" podID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerID="1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea" exitCode=0 Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.949079 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-l4q77" event={"ID":"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e","Type":"ContainerDied","Data":"1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.949102 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-l4q77" event={"ID":"1dfc8e7e-942f-478b-bec8-f4e6fdccde9e","Type":"ContainerDied","Data":"2e536db7fcea3044d0f7a3458a49a98bb457273706f94c4419bfb2853be1a562"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.949118 4982 scope.go:117] "RemoveContainer" containerID="1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.949202 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-l4q77" Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.969175 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6959c78bf5-trn2r" event={"ID":"c49f67d9-be69-448a-9f50-cf2801ed1528","Type":"ContainerStarted","Data":"732b3f44177f3c0955253216add36e0d7f86738276e441897aed76bd5024a3ab"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.975819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" event={"ID":"be86b806-f778-4826-8cfa-c29d366c9af5","Type":"ContainerStarted","Data":"faadac4d00c6f41df194b210756886d2caa37bb189f8e5cdac68cffd6a9c3d86"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.979972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d49874b88-l4b2l" event={"ID":"a1ee7b44-eb70-4f01-9d59-fa4c4856229a","Type":"ContainerStarted","Data":"d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.980115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d49874b88-l4b2l" event={"ID":"a1ee7b44-eb70-4f01-9d59-fa4c4856229a","Type":"ContainerStarted","Data":"c5aa4a5ea69e49e3183e1a75e0efb573b34cd923cdc7729fc862e990338eaad2"} Jan 23 08:17:36 crc kubenswrapper[4982]: I0123 08:17:36.982365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" event={"ID":"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79","Type":"ContainerStarted","Data":"669c515a49073fabe78d9973b630a27ee4b62e2e367ff24f736937210563b022"} Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.137501 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f7c478fb9-56vcd"] Jan 23 08:17:37 crc kubenswrapper[4982]: W0123 08:17:37.183759 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5448f946_ae45_4ef2_84fb_d848a335c81e.slice/crio-3275681834a75fe5a667e386f502dc713649ea9cee55afc920468e7742a548b6 WatchSource:0}: Error finding container 3275681834a75fe5a667e386f502dc713649ea9cee55afc920468e7742a548b6: Status 404 returned error can't find the container with id 3275681834a75fe5a667e386f502dc713649ea9cee55afc920468e7742a548b6 Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.194048 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.355720 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-l4q77"] Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.382024 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-l4q77"] Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.843477 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" path="/var/lib/kubelet/pods/1dfc8e7e-942f-478b-bec8-f4e6fdccde9e/volumes" Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.844744 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be893495-d5e8-4fb0-9d5d-16e26954e2aa" path="/var/lib/kubelet/pods/be893495-d5e8-4fb0-9d5d-16e26954e2aa/volumes" Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.997307 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" 
containerID="f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7" exitCode=0 Jan 23 08:17:37 crc kubenswrapper[4982]: I0123 08:17:37.997405 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" event={"ID":"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79","Type":"ContainerDied","Data":"f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7"} Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.000043 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c478fb9-56vcd" event={"ID":"5448f946-ae45-4ef2-84fb-d848a335c81e","Type":"ContainerStarted","Data":"3275681834a75fe5a667e386f502dc713649ea9cee55afc920468e7742a548b6"} Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.002088 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerStarted","Data":"2617f0b350f534f8fbc2be99af4d9786845379874798a710f17ba71f36408618"} Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.003915 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d49874b88-l4b2l" event={"ID":"a1ee7b44-eb70-4f01-9d59-fa4c4856229a","Type":"ContainerStarted","Data":"c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa"} Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.026111 4982 scope.go:117] "RemoveContainer" containerID="c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.033458 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d49874b88-l4b2l" podStartSLOduration=3.033437003 podStartE2EDuration="3.033437003s" podCreationTimestamp="2026-01-23 08:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:38.027914345 +0000 UTC m=+1332.508277741" watchObservedRunningTime="2026-01-23 08:17:38.033437003 +0000 UTC m=+1332.513800389" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.297600 4982 scope.go:117] "RemoveContainer" containerID="1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea" Jan 23 08:17:38 crc kubenswrapper[4982]: E0123 08:17:38.298556 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea\": container with ID starting with 1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea not found: ID does not exist" containerID="1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.298609 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea"} err="failed to get container status \"1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea\": rpc error: code = NotFound desc = could not find container \"1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea\": container with ID starting with 1157b1a4c81f989a7c92d6149544de1da244d248378a66079d824559d25698ea not found: ID does not exist" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.298675 4982 scope.go:117] "RemoveContainer" containerID="c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33" Jan 23 08:17:38 crc kubenswrapper[4982]: E0123 
08:17:38.299277 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33\": container with ID starting with c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33 not found: ID does not exist" containerID="c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.299299 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33"} err="failed to get container status \"c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33\": rpc error: code = NotFound desc = could not find container \"c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33\": container with ID starting with c5165d77692807486419a4d96b4bd74963c5b41ff9752ef5cb52ec6c0e9e2d33 not found: ID does not exist" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.341770 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.441447 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp9hr\" (UniqueName: \"kubernetes.io/projected/c680d8da-c090-4c7e-8068-baecb59402e8-kube-api-access-rp9hr\") pod \"c680d8da-c090-4c7e-8068-baecb59402e8\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.441503 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-config-data\") pod \"c680d8da-c090-4c7e-8068-baecb59402e8\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.441677 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-db-sync-config-data\") pod \"c680d8da-c090-4c7e-8068-baecb59402e8\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.441761 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-combined-ca-bundle\") pod \"c680d8da-c090-4c7e-8068-baecb59402e8\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.441838 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c680d8da-c090-4c7e-8068-baecb59402e8-etc-machine-id\") pod \"c680d8da-c090-4c7e-8068-baecb59402e8\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.441916 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-scripts\") pod \"c680d8da-c090-4c7e-8068-baecb59402e8\" (UID: \"c680d8da-c090-4c7e-8068-baecb59402e8\") " Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.443156 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c680d8da-c090-4c7e-8068-baecb59402e8-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "c680d8da-c090-4c7e-8068-baecb59402e8" (UID: "c680d8da-c090-4c7e-8068-baecb59402e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.449892 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c680d8da-c090-4c7e-8068-baecb59402e8-kube-api-access-rp9hr" (OuterVolumeSpecName: "kube-api-access-rp9hr") pod "c680d8da-c090-4c7e-8068-baecb59402e8" (UID: "c680d8da-c090-4c7e-8068-baecb59402e8"). InnerVolumeSpecName "kube-api-access-rp9hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.452384 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-scripts" (OuterVolumeSpecName: "scripts") pod "c680d8da-c090-4c7e-8068-baecb59402e8" (UID: "c680d8da-c090-4c7e-8068-baecb59402e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.454363 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c680d8da-c090-4c7e-8068-baecb59402e8" (UID: "c680d8da-c090-4c7e-8068-baecb59402e8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.515480 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c680d8da-c090-4c7e-8068-baecb59402e8" (UID: "c680d8da-c090-4c7e-8068-baecb59402e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.544722 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.545101 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.545199 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c680d8da-c090-4c7e-8068-baecb59402e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.545279 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.545375 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp9hr\" (UniqueName: \"kubernetes.io/projected/c680d8da-c090-4c7e-8068-baecb59402e8-kube-api-access-rp9hr\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.559766 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-config-data" (OuterVolumeSpecName: "config-data") pod "c680d8da-c090-4c7e-8068-baecb59402e8" (UID: "c680d8da-c090-4c7e-8068-baecb59402e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:38 crc kubenswrapper[4982]: I0123 08:17:38.647304 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c680d8da-c090-4c7e-8068-baecb59402e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.014177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wwwj" event={"ID":"c680d8da-c090-4c7e-8068-baecb59402e8","Type":"ContainerDied","Data":"6aa1b2e33da844c29a331edad2ef6922ba960cf4851b672ceac249544d1bc8e9"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.014204 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4wwwj" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.014219 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aa1b2e33da844c29a331edad2ef6922ba960cf4851b672ceac249544d1bc8e9" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.017231 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6959c78bf5-trn2r" event={"ID":"c49f67d9-be69-448a-9f50-cf2801ed1528","Type":"ContainerStarted","Data":"bd096c9a64adc5e24222bd520105f18c192474be440d5f8370a222b01b9f8c41"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.017277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6959c78bf5-trn2r" event={"ID":"c49f67d9-be69-448a-9f50-cf2801ed1528","Type":"ContainerStarted","Data":"db343ad642c9cc837f707a4e56a8a8c708cc25fec21d7f114d380fb4ad3f1acf"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.020425 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" event={"ID":"be86b806-f778-4826-8cfa-c29d366c9af5","Type":"ContainerStarted","Data":"121ac015dcf20bdb873bb6bfe8f528431aaf6cd7e2e7cfb1ebc3aff798e014ee"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.020458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" event={"ID":"be86b806-f778-4826-8cfa-c29d366c9af5","Type":"ContainerStarted","Data":"74a7b42a19564dc748ab37edddb7525e4b1a173e9cc8b1eb1fb22abc814887a5"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.023979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c478fb9-56vcd" event={"ID":"5448f946-ae45-4ef2-84fb-d848a335c81e","Type":"ContainerStarted","Data":"447acfd4c048d55e1f08058b4c91272ee9f4e8985a1b8beb44df17770342a816"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.024035 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c478fb9-56vcd" event={"ID":"5448f946-ae45-4ef2-84fb-d848a335c81e","Type":"ContainerStarted","Data":"af380f9803b5a0cc4d7580e7e89b720729e3c7d4ba623f631701258b3aa79272"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.024167 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.027832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerStarted","Data":"a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.031559 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" event={"ID":"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79","Type":"ContainerStarted","Data":"efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f"} Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.031593 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.031608 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.031650 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:39 crc 
kubenswrapper[4982]: I0123 08:17:39.046381 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6959c78bf5-trn2r" podStartSLOduration=2.030992328 podStartE2EDuration="4.046361469s" podCreationTimestamp="2026-01-23 08:17:35 +0000 UTC" firstStartedPulling="2026-01-23 08:17:36.052566033 +0000 UTC m=+1330.532929419" lastFinishedPulling="2026-01-23 08:17:38.067935174 +0000 UTC m=+1332.548298560" observedRunningTime="2026-01-23 08:17:39.044926111 +0000 UTC m=+1333.525289497" watchObservedRunningTime="2026-01-23 08:17:39.046361469 +0000 UTC m=+1333.526724855" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.071863 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" podStartSLOduration=4.071834779 podStartE2EDuration="4.071834779s" podCreationTimestamp="2026-01-23 08:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:39.06436214 +0000 UTC m=+1333.544725526" watchObservedRunningTime="2026-01-23 08:17:39.071834779 +0000 UTC m=+1333.552198165" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.099168 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f7c478fb9-56vcd" podStartSLOduration=4.099141478 podStartE2EDuration="4.099141478s" podCreationTimestamp="2026-01-23 08:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:39.093852387 +0000 UTC m=+1333.574215773" watchObservedRunningTime="2026-01-23 08:17:39.099141478 +0000 UTC m=+1333.579504864" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.292838 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" podStartSLOduration=2.444519729 podStartE2EDuration="4.292808859s" podCreationTimestamp="2026-01-23 08:17:35 +0000 UTC" firstStartedPulling="2026-01-23 08:17:36.235906268 +0000 UTC m=+1330.716269654" lastFinishedPulling="2026-01-23 08:17:38.084195398 +0000 UTC m=+1332.564558784" observedRunningTime="2026-01-23 08:17:39.129024106 +0000 UTC m=+1333.609387502" watchObservedRunningTime="2026-01-23 08:17:39.292808859 +0000 UTC m=+1333.773172245" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.349735 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:39 crc kubenswrapper[4982]: E0123 08:17:39.350330 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerName="init" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.350349 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerName="init" Jan 23 08:17:39 crc kubenswrapper[4982]: E0123 08:17:39.350364 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerName="dnsmasq-dns" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.350371 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerName="dnsmasq-dns" Jan 23 08:17:39 crc kubenswrapper[4982]: E0123 08:17:39.350390 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c680d8da-c090-4c7e-8068-baecb59402e8" containerName="cinder-db-sync" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 
08:17:39.350398 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c680d8da-c090-4c7e-8068-baecb59402e8" containerName="cinder-db-sync" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.350691 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfc8e7e-942f-478b-bec8-f4e6fdccde9e" containerName="dnsmasq-dns" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.350715 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c680d8da-c090-4c7e-8068-baecb59402e8" containerName="cinder-db-sync" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.352110 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.370809 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.371152 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-w2jsh" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.371296 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.371400 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.372407 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxhgm\" (UniqueName: \"kubernetes.io/projected/996028f4-9ce3-4978-95f9-5a49e177c593-kube-api-access-kxhgm\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.372493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/996028f4-9ce3-4978-95f9-5a49e177c593-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.372523 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.372554 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-scripts\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.372633 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.372655 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.390018 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.424451 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-95sdf"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.488758 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.488833 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-scripts\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.488934 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.488970 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.489031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxhgm\" (UniqueName: \"kubernetes.io/projected/996028f4-9ce3-4978-95f9-5a49e177c593-kube-api-access-kxhgm\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.489113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/996028f4-9ce3-4978-95f9-5a49e177c593-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.489240 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/996028f4-9ce3-4978-95f9-5a49e177c593-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.513583 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.513936 4982 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-m27gc"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.514138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.517222 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.549655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-scripts\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.564273 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.599767 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-m27gc"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.616679 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxhgm\" (UniqueName: \"kubernetes.io/projected/996028f4-9ce3-4978-95f9-5a49e177c593-kube-api-access-kxhgm\") pod \"cinder-scheduler-0\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.704999 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.705076 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbfbn\" (UniqueName: \"kubernetes.io/projected/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-kube-api-access-tbfbn\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.705101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.705147 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.705174 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-config\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.705231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.707788 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7965657874-h2sgm"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.709655 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.715129 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.717587 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.730437 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7965657874-h2sgm"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.738460 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.754815 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.756939 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.760312 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.780515 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.808594 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809071 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-config\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809118 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdzf\" (UniqueName: \"kubernetes.io/projected/f2b7b481-7320-4d03-a230-a173c66dae36-kube-api-access-kzdzf\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809173 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-public-tls-certs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809201 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809258 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809348 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbfbn\" (UniqueName: \"kubernetes.io/projected/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-kube-api-access-tbfbn\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 
08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809371 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809393 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data-custom\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809423 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-internal-tls-certs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809442 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-combined-ca-bundle\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.809472 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b7b481-7320-4d03-a230-a173c66dae36-logs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.811737 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.812133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.812222 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.812766 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-config\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: 
I0123 08:17:39.813188 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.849918 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbfbn\" (UniqueName: \"kubernetes.io/projected/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-kube-api-access-tbfbn\") pod \"dnsmasq-dns-75dbb546bf-m27gc\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911680 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73b57458-6b5d-4798-9750-64d5c878e50f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911729 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmzh\" (UniqueName: \"kubernetes.io/projected/73b57458-6b5d-4798-9750-64d5c878e50f-kube-api-access-wtmzh\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911775 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzdzf\" (UniqueName: \"kubernetes.io/projected/f2b7b481-7320-4d03-a230-a173c66dae36-kube-api-access-kzdzf\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data-custom\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911869 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-public-tls-certs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.911897 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912008 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-scripts\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912167 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data-custom\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912229 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-internal-tls-certs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912255 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-combined-ca-bundle\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912302 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b7b481-7320-4d03-a230-a173c66dae36-logs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.912362 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73b57458-6b5d-4798-9750-64d5c878e50f-logs\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.919083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b7b481-7320-4d03-a230-a173c66dae36-logs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.922835 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-combined-ca-bundle\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.923793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data\") pod 
\"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.929190 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-internal-tls-certs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.949159 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-public-tls-certs\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.953950 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzdzf\" (UniqueName: \"kubernetes.io/projected/f2b7b481-7320-4d03-a230-a173c66dae36-kube-api-access-kzdzf\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:39 crc kubenswrapper[4982]: I0123 08:17:39.966394 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data-custom\") pod \"barbican-api-7965657874-h2sgm\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017556 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017683 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-scripts\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017719 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017846 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73b57458-6b5d-4798-9750-64d5c878e50f-logs\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017878 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73b57458-6b5d-4798-9750-64d5c878e50f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017900 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wtmzh\" (UniqueName: \"kubernetes.io/projected/73b57458-6b5d-4798-9750-64d5c878e50f-kube-api-access-wtmzh\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.017943 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data-custom\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.026706 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73b57458-6b5d-4798-9750-64d5c878e50f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.027026 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73b57458-6b5d-4798-9750-64d5c878e50f-logs\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.033821 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data-custom\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.038064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.044741 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-scripts\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.058504 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.066761 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.074825 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.084198 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtmzh\" (UniqueName: \"kubernetes.io/projected/73b57458-6b5d-4798-9750-64d5c878e50f-kube-api-access-wtmzh\") pod \"cinder-api-0\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.105150 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.342686 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:40 crc kubenswrapper[4982]: W0123 08:17:40.347969 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod996028f4_9ce3_4978_95f9_5a49e177c593.slice/crio-87c86d595e598d110d86476075fe949a18fcbf2b2e06cfb5a0f3b4978759d737 WatchSource:0}: Error finding container 87c86d595e598d110d86476075fe949a18fcbf2b2e06cfb5a0f3b4978759d737: Status 404 returned error can't find the container with id 87c86d595e598d110d86476075fe949a18fcbf2b2e06cfb5a0f3b4978759d737 Jan 23 08:17:40 crc kubenswrapper[4982]: W0123 08:17:40.915932 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod792dd2a0_6d18_440d_becb_f3a3f1ff7fc9.slice/crio-d723ab3c106ddd0fa24e1ad6801766000e485b5409ee10e0e7b014c5535129c3 WatchSource:0}: Error finding container d723ab3c106ddd0fa24e1ad6801766000e485b5409ee10e0e7b014c5535129c3: Status 404 returned error can't find the container with id d723ab3c106ddd0fa24e1ad6801766000e485b5409ee10e0e7b014c5535129c3 Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.931301 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-m27gc"] Jan 23 08:17:40 crc kubenswrapper[4982]: W0123 08:17:40.935451 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73b57458_6b5d_4798_9750_64d5c878e50f.slice/crio-9c27342e279c7ff807c2ee98ef44a385c1f1adf8961abe31a8beabbe0d2a2a3c WatchSource:0}: Error finding container 9c27342e279c7ff807c2ee98ef44a385c1f1adf8961abe31a8beabbe0d2a2a3c: Status 404 returned error can't find the container with id 9c27342e279c7ff807c2ee98ef44a385c1f1adf8961abe31a8beabbe0d2a2a3c Jan 23 08:17:40 crc kubenswrapper[4982]: I0123 08:17:40.960831 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.044013 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7965657874-h2sgm"] Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.090772 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"996028f4-9ce3-4978-95f9-5a49e177c593","Type":"ContainerStarted","Data":"87c86d595e598d110d86476075fe949a18fcbf2b2e06cfb5a0f3b4978759d737"} Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.101496 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7965657874-h2sgm" event={"ID":"f2b7b481-7320-4d03-a230-a173c66dae36","Type":"ContainerStarted","Data":"4db0ea8fe1956c0c18296c433c11a6562155392ffb15777f4d99c6e42d090bb1"} Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.103669 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerStarted","Data":"b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f"} Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.118872 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" event={"ID":"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9","Type":"ContainerStarted","Data":"d723ab3c106ddd0fa24e1ad6801766000e485b5409ee10e0e7b014c5535129c3"} Jan 23 
08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.131956 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerName="dnsmasq-dns" containerID="cri-o://efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f" gracePeriod=10 Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.132282 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73b57458-6b5d-4798-9750-64d5c878e50f","Type":"ContainerStarted","Data":"9c27342e279c7ff807c2ee98ef44a385c1f1adf8961abe31a8beabbe0d2a2a3c"} Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.632355 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.802387 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqbf5\" (UniqueName: \"kubernetes.io/projected/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-kube-api-access-cqbf5\") pod \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.802524 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-nb\") pod \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.802610 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-sb\") pod \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.802668 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-swift-storage-0\") pod \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.802732 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-config\") pod \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.802856 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-svc\") pod \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\" (UID: \"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79\") " Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.816559 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-kube-api-access-cqbf5" (OuterVolumeSpecName: "kube-api-access-cqbf5") pod "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" (UID: "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79"). InnerVolumeSpecName "kube-api-access-cqbf5". 
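
The dnsmasq-dns-66cdd4b5b5-95sdf teardown above begins with kuberuntime_container.go:808 "Killing container with a grace period" and gracePeriod=10; the replacement cinder-api-0 is later killed the same way with gracePeriod=30. Those fields are easy to pull out when auditing how much shutdown time workloads were given. A minimal sketch, same stdin assumption, file name mine:

```go
// grace_kills.go (hypothetical name) — list every "Killing container with a
// grace period" entry with its pod, container name, and grace period.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Field order as seen above: pod=..., podUID=..., containerName=...,
	// containerID=..., gracePeriod=N (seconds).
	re := regexp.MustCompile(`Killing container with a grace period" pod="([^"]+)".*?containerName="([^"]+)".*?gracePeriod=(\d+)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("pod=%s container=%s gracePeriod=%ss\n", m[1], m[2], m[3])
		}
	}
}
```
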
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.908252 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqbf5\" (UniqueName: \"kubernetes.io/projected/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-kube-api-access-cqbf5\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.950678 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" (UID: "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:41 crc kubenswrapper[4982]: I0123 08:17:41.983954 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" (UID: "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.010592 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.010648 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.018949 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" (UID: "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.032785 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-config" (OuterVolumeSpecName: "config") pod "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" (UID: "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.059556 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" (UID: "a9b916b8-a5b7-4bc1-a784-1bef8b79aa79"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.113496 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.113544 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.113568 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.152091 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7965657874-h2sgm" event={"ID":"f2b7b481-7320-4d03-a230-a173c66dae36","Type":"ContainerStarted","Data":"44123210e26061b640cdcc4b3b9dc1005507bd1badfae509c93d9f329c1f4807"} Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.156053 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerStarted","Data":"6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412"} Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.160358 4982 generic.go:334] "Generic (PLEG): container finished" podID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerID="6d6c20a93d976a8163952bb3139ca0124fdfa3d7846d67b2af3e18e272c20542" exitCode=0 Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.160415 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" event={"ID":"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9","Type":"ContainerDied","Data":"6d6c20a93d976a8163952bb3139ca0124fdfa3d7846d67b2af3e18e272c20542"} Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.201601 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerID="efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f" exitCode=0 Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.202145 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.203435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" event={"ID":"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79","Type":"ContainerDied","Data":"efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f"} Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.203514 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-95sdf" event={"ID":"a9b916b8-a5b7-4bc1-a784-1bef8b79aa79","Type":"ContainerDied","Data":"669c515a49073fabe78d9973b630a27ee4b62e2e367ff24f736937210563b022"} Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.203537 4982 scope.go:117] "RemoveContainer" containerID="efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.228011 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73b57458-6b5d-4798-9750-64d5c878e50f","Type":"ContainerStarted","Data":"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a"} Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.370896 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-95sdf"] Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.398318 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-95sdf"] Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.407901 4982 scope.go:117] "RemoveContainer" containerID="f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.516127 4982 scope.go:117] "RemoveContainer" containerID="efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f" Jan 23 08:17:42 crc kubenswrapper[4982]: E0123 08:17:42.517773 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f\": container with ID starting with efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f not found: ID does not exist" containerID="efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.517819 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f"} err="failed to get container status \"efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f\": rpc error: code = NotFound desc = could not find container \"efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f\": container with ID starting with efc3d0424d6b04432157bcd10cf585ff006e2a20a8af9163bc8cdc8b9a969f3f not found: ID does not exist" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.517851 4982 scope.go:117] "RemoveContainer" containerID="f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7" Jan 23 08:17:42 crc kubenswrapper[4982]: E0123 08:17:42.518466 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7\": container with ID starting with f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7 not found: ID does not exist" 
containerID="f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7" Jan 23 08:17:42 crc kubenswrapper[4982]: I0123 08:17:42.518498 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7"} err="failed to get container status \"f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7\": rpc error: code = NotFound desc = could not find container \"f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7\": container with ID starting with f3e7c2cecc805fffa9cb39cfb5968a981f3fddfd344cf2fa64c134a77a1a24c7 not found: ID does not exist" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.112649 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.244075 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7965657874-h2sgm" event={"ID":"f2b7b481-7320-4d03-a230-a173c66dae36","Type":"ContainerStarted","Data":"cf9c6471bb4e01bd9fd153522ac53f92eade7e364f44fd40abf8bdcad7b61af8"} Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.245607 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.245654 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.247202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" event={"ID":"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9","Type":"ContainerStarted","Data":"7816d0163bf0a0ad4da5b64c9c34faf8fd633daf3ed9dbdf0208d63d147f6090"} Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.247335 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.266594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73b57458-6b5d-4798-9750-64d5c878e50f","Type":"ContainerStarted","Data":"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2"} Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.266650 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.284325 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7965657874-h2sgm" podStartSLOduration=4.284303985 podStartE2EDuration="4.284303985s" podCreationTimestamp="2026-01-23 08:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:43.27211406 +0000 UTC m=+1337.752477436" watchObservedRunningTime="2026-01-23 08:17:43.284303985 +0000 UTC m=+1337.764667371" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.294538 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"996028f4-9ce3-4978-95f9-5a49e177c593","Type":"ContainerStarted","Data":"1668dacd607707ea67f279b1e26aa82215a52c541a768c6d1aba82d9e25f6cd3"} Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.298292 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" 
podStartSLOduration=4.298275288 podStartE2EDuration="4.298275288s" podCreationTimestamp="2026-01-23 08:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:43.296131041 +0000 UTC m=+1337.776494427" watchObservedRunningTime="2026-01-23 08:17:43.298275288 +0000 UTC m=+1337.778638674" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.323091 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.32306647 podStartE2EDuration="4.32306647s" podCreationTimestamp="2026-01-23 08:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:43.318164619 +0000 UTC m=+1337.798528005" watchObservedRunningTime="2026-01-23 08:17:43.32306647 +0000 UTC m=+1337.803429876" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.845964 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" path="/var/lib/kubelet/pods/a9b916b8-a5b7-4bc1-a784-1bef8b79aa79/volumes" Jan 23 08:17:43 crc kubenswrapper[4982]: I0123 08:17:43.925321 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:44 crc kubenswrapper[4982]: I0123 08:17:44.306320 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"996028f4-9ce3-4978-95f9-5a49e177c593","Type":"ContainerStarted","Data":"e5ca03806bb1b5ba5583adb335bdec01b84ce903d9b6c00cb1a83855d2d2f7d6"} Jan 23 08:17:44 crc kubenswrapper[4982]: I0123 08:17:44.312215 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerStarted","Data":"e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a"} Jan 23 08:17:44 crc kubenswrapper[4982]: I0123 08:17:44.312265 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 08:17:44 crc kubenswrapper[4982]: I0123 08:17:44.344068 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.293286866 podStartE2EDuration="5.344047122s" podCreationTimestamp="2026-01-23 08:17:39 +0000 UTC" firstStartedPulling="2026-01-23 08:17:40.350231164 +0000 UTC m=+1334.830594550" lastFinishedPulling="2026-01-23 08:17:41.40099142 +0000 UTC m=+1335.881354806" observedRunningTime="2026-01-23 08:17:44.334980889 +0000 UTC m=+1338.815344275" watchObservedRunningTime="2026-01-23 08:17:44.344047122 +0000 UTC m=+1338.824410508" Jan 23 08:17:44 crc kubenswrapper[4982]: I0123 08:17:44.368518 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.077266645 podStartE2EDuration="8.368491324s" podCreationTimestamp="2026-01-23 08:17:36 +0000 UTC" firstStartedPulling="2026-01-23 08:17:37.617481507 +0000 UTC m=+1332.097844903" lastFinishedPulling="2026-01-23 08:17:42.908706196 +0000 UTC m=+1337.389069582" observedRunningTime="2026-01-23 08:17:44.360965793 +0000 UTC m=+1338.841329219" watchObservedRunningTime="2026-01-23 08:17:44.368491324 +0000 UTC m=+1338.848854710" Jan 23 08:17:44 crc kubenswrapper[4982]: I0123 08:17:44.738728 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 08:17:45 crc kubenswrapper[4982]: I0123 
08:17:45.022935 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:45 crc kubenswrapper[4982]: I0123 08:17:45.330254 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api-log" containerID="cri-o://69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a" gracePeriod=30 Jan 23 08:17:45 crc kubenswrapper[4982]: I0123 08:17:45.331766 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api" containerID="cri-o://e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2" gracePeriod=30 Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.015327 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184414 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73b57458-6b5d-4798-9750-64d5c878e50f-etc-machine-id\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184545 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtmzh\" (UniqueName: \"kubernetes.io/projected/73b57458-6b5d-4798-9750-64d5c878e50f-kube-api-access-wtmzh\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184536 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73b57458-6b5d-4798-9750-64d5c878e50f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "etc-machine-id". 
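
The pod_startup_latency_tracker.go:104 entries a few lines above encode the kubelet's startup SLI. The numbers are consistent with podStartSLOduration excluding image-pull time while podStartE2EDuration includes it: for cinder-scheduler-0 the two differ by 1.05 s, exactly the firstStartedPulling-to-lastFinishedPulling window, while pods that pulled nothing (the zero-value 0001-01-01 pull timestamps) report identical values. A small extractor, same stdin assumption, file name mine:

```go
// startup_latency.go (hypothetical name) — pull the startup-duration fields
// out of "Observed pod startup duration" entries like the ones above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

func main() {
	re := regexp.MustCompile(`Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+) podStartE2EDuration="([^"]+)"`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			e2e, err := time.ParseDuration(m[3]) // e.g. "5.344047122s"
			if err != nil {
				continue
			}
			fmt.Printf("%-45s SLO=%ss  E2E=%s\n", m[1], m[2], e2e)
		}
	}
}
```
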
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184612 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184737 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data-custom\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184760 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-scripts\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184796 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-combined-ca-bundle\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.184846 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73b57458-6b5d-4798-9750-64d5c878e50f-logs\") pod \"73b57458-6b5d-4798-9750-64d5c878e50f\" (UID: \"73b57458-6b5d-4798-9750-64d5c878e50f\") " Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.185374 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73b57458-6b5d-4798-9750-64d5c878e50f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.185640 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b57458-6b5d-4798-9750-64d5c878e50f-logs" (OuterVolumeSpecName: "logs") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.193028 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b57458-6b5d-4798-9750-64d5c878e50f-kube-api-access-wtmzh" (OuterVolumeSpecName: "kube-api-access-wtmzh") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "kube-api-access-wtmzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.193054 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-scripts" (OuterVolumeSpecName: "scripts") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.210431 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.221765 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.267061 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data" (OuterVolumeSpecName: "config-data") pod "73b57458-6b5d-4798-9750-64d5c878e50f" (UID: "73b57458-6b5d-4798-9750-64d5c878e50f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.287938 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.287977 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.287989 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.288000 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73b57458-6b5d-4798-9750-64d5c878e50f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.288011 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtmzh\" (UniqueName: \"kubernetes.io/projected/73b57458-6b5d-4798-9750-64d5c878e50f-kube-api-access-wtmzh\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.288023 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73b57458-6b5d-4798-9750-64d5c878e50f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344027 4982 generic.go:334] "Generic (PLEG): container finished" podID="73b57458-6b5d-4798-9750-64d5c878e50f" containerID="e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2" exitCode=0 Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344082 4982 generic.go:334] "Generic (PLEG): container finished" podID="73b57458-6b5d-4798-9750-64d5c878e50f" containerID="69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a" exitCode=143 Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344099 4982 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344169 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73b57458-6b5d-4798-9750-64d5c878e50f","Type":"ContainerDied","Data":"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2"} Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344204 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73b57458-6b5d-4798-9750-64d5c878e50f","Type":"ContainerDied","Data":"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a"} Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"73b57458-6b5d-4798-9750-64d5c878e50f","Type":"ContainerDied","Data":"9c27342e279c7ff807c2ee98ef44a385c1f1adf8961abe31a8beabbe0d2a2a3c"} Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.344239 4982 scope.go:117] "RemoveContainer" containerID="e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.404831 4982 scope.go:117] "RemoveContainer" containerID="69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.412555 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.428948 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.447211 4982 scope.go:117] "RemoveContainer" containerID="e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2" Jan 23 08:17:46 crc kubenswrapper[4982]: E0123 08:17:46.449346 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2\": container with ID starting with e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2 not found: ID does not exist" containerID="e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.449384 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2"} err="failed to get container status \"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2\": rpc error: code = NotFound desc = could not find container \"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2\": container with ID starting with e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2 not found: ID does not exist" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.449407 4982 scope.go:117] "RemoveContainer" containerID="69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a" Jan 23 08:17:46 crc kubenswrapper[4982]: E0123 08:17:46.451187 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a\": container with ID starting with 69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a not found: ID does not exist" containerID="69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a" Jan 23 08:17:46 crc 
kubenswrapper[4982]: I0123 08:17:46.451219 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a"} err="failed to get container status \"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a\": rpc error: code = NotFound desc = could not find container \"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a\": container with ID starting with 69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a not found: ID does not exist" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.451243 4982 scope.go:117] "RemoveContainer" containerID="e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.451546 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2"} err="failed to get container status \"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2\": rpc error: code = NotFound desc = could not find container \"e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2\": container with ID starting with e5bd97978fa3b061d0daf57c3f3ae095352c848ec2bfeef0825d3e71b7dd62a2 not found: ID does not exist" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.451563 4982 scope.go:117] "RemoveContainer" containerID="69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.451874 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a"} err="failed to get container status \"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a\": rpc error: code = NotFound desc = could not find container \"69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a\": container with ID starting with 69aac2f813994f2a1c0213007880ee451af30b2d3e1c54ad04aaa534ac7d291a not found: ID does not exist" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.473419 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:46 crc kubenswrapper[4982]: E0123 08:17:46.473991 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerName="dnsmasq-dns" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474015 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerName="dnsmasq-dns" Jan 23 08:17:46 crc kubenswrapper[4982]: E0123 08:17:46.474031 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerName="init" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474038 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerName="init" Jan 23 08:17:46 crc kubenswrapper[4982]: E0123 08:17:46.474049 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474057 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api" Jan 23 08:17:46 crc kubenswrapper[4982]: E0123 08:17:46.474081 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api-log" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474087 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api-log" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474795 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api-log" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474820 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" containerName="cinder-api" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.474832 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b916b8-a5b7-4bc1-a784-1bef8b79aa79" containerName="dnsmasq-dns" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.476225 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.481124 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.481280 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.481322 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.489772 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596307 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk96h\" (UniqueName: \"kubernetes.io/projected/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-kube-api-access-jk96h\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596374 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596422 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-scripts\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " 
pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596671 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596697 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-logs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.596798 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.698544 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.698641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-logs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.698661 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.698730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.699186 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.699284 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk96h\" (UniqueName: 
\"kubernetes.io/projected/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-kube-api-access-jk96h\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.699321 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.699346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.699382 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-scripts\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.702195 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-logs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.702412 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.705782 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.707022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-scripts\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.707208 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.724846 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data-custom\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.725317 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data\") pod 
\"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.726269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.735501 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk96h\" (UniqueName: \"kubernetes.io/projected/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-kube-api-access-jk96h\") pod \"cinder-api-0\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " pod="openstack/cinder-api-0" Jan 23 08:17:46 crc kubenswrapper[4982]: I0123 08:17:46.837798 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:17:47 crc kubenswrapper[4982]: I0123 08:17:47.432667 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:17:47 crc kubenswrapper[4982]: W0123 08:17:47.434926 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a8b0b53_6555_429b_a4d9_b0cb0b1e04cb.slice/crio-9e99d4ff06777ba54e1801d9213c9121fabcd2935a246d5e54b9a7f8ffbb75d2 WatchSource:0}: Error finding container 9e99d4ff06777ba54e1801d9213c9121fabcd2935a246d5e54b9a7f8ffbb75d2: Status 404 returned error can't find the container with id 9e99d4ff06777ba54e1801d9213c9121fabcd2935a246d5e54b9a7f8ffbb75d2 Jan 23 08:17:47 crc kubenswrapper[4982]: I0123 08:17:47.843202 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b57458-6b5d-4798-9750-64d5c878e50f" path="/var/lib/kubelet/pods/73b57458-6b5d-4798-9750-64d5c878e50f/volumes" Jan 23 08:17:48 crc kubenswrapper[4982]: I0123 08:17:48.370977 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb","Type":"ContainerStarted","Data":"9e99d4ff06777ba54e1801d9213c9121fabcd2935a246d5e54b9a7f8ffbb75d2"} Jan 23 08:17:49 crc kubenswrapper[4982]: I0123 08:17:49.396084 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb","Type":"ContainerStarted","Data":"0b613fdca688bd3035970b7dc6b487ea56dac750dadeb82a2a1db42250f517a7"} Jan 23 08:17:49 crc kubenswrapper[4982]: I0123 08:17:49.397799 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb","Type":"ContainerStarted","Data":"40ca415939a7e1e94f0012bb346eed19b90d1b52ec23f37e175fe94046754239"} Jan 23 08:17:49 crc kubenswrapper[4982]: I0123 08:17:49.397838 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 08:17:49 crc kubenswrapper[4982]: I0123 08:17:49.429793 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.429768103 podStartE2EDuration="3.429768103s" podCreationTimestamp="2026-01-23 08:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:49.422872519 +0000 UTC m=+1343.903235905" watchObservedRunningTime="2026-01-23 08:17:49.429768103 +0000 UTC m=+1343.910131499" Jan 23 08:17:50 crc 
kubenswrapper[4982]: I0123 08:17:50.005306 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.068811 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.071286 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.211778 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-snh6x"] Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.212075 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerName="dnsmasq-dns" containerID="cri-o://4f417dd4e55ae658952be9e4ba27cdadd79bbc44d62877a77b42a23640f293e9" gracePeriod=10 Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.431696 4982 generic.go:334] "Generic (PLEG): container finished" podID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerID="4f417dd4e55ae658952be9e4ba27cdadd79bbc44d62877a77b42a23640f293e9" exitCode=0 Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.431999 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="cinder-scheduler" containerID="cri-o://1668dacd607707ea67f279b1e26aa82215a52c541a768c6d1aba82d9e25f6cd3" gracePeriod=30 Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.432341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" event={"ID":"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8","Type":"ContainerDied","Data":"4f417dd4e55ae658952be9e4ba27cdadd79bbc44d62877a77b42a23640f293e9"} Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.432846 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="probe" containerID="cri-o://e5ca03806bb1b5ba5583adb335bdec01b84ce903d9b6c00cb1a83855d2d2f7d6" gracePeriod=30 Jan 23 08:17:50 crc kubenswrapper[4982]: I0123 08:17:50.859908 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.009364 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-nb\") pod \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.009449 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-sb\") pod \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.009502 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-swift-storage-0\") pod \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.009543 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-svc\") pod \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.009710 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrxpn\" (UniqueName: \"kubernetes.io/projected/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-kube-api-access-vrxpn\") pod \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.009797 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-config\") pod \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\" (UID: \"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8\") " Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.033842 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-kube-api-access-vrxpn" (OuterVolumeSpecName: "kube-api-access-vrxpn") pod "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" (UID: "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8"). InnerVolumeSpecName "kube-api-access-vrxpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.102663 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" (UID: "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.105542 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-config" (OuterVolumeSpecName: "config") pod "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" (UID: "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.112090 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrxpn\" (UniqueName: \"kubernetes.io/projected/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-kube-api-access-vrxpn\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.112126 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.112141 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.129780 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" (UID: "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.129990 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" (UID: "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.139339 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" (UID: "a25c1411-c0cb-4eb1-a7da-d5a6a91125c8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.213833 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.214066 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.214136 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.445645 4982 generic.go:334] "Generic (PLEG): container finished" podID="996028f4-9ce3-4978-95f9-5a49e177c593" containerID="e5ca03806bb1b5ba5583adb335bdec01b84ce903d9b6c00cb1a83855d2d2f7d6" exitCode=0 Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.445707 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"996028f4-9ce3-4978-95f9-5a49e177c593","Type":"ContainerDied","Data":"e5ca03806bb1b5ba5583adb335bdec01b84ce903d9b6c00cb1a83855d2d2f7d6"} Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.447320 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" event={"ID":"a25c1411-c0cb-4eb1-a7da-d5a6a91125c8","Type":"ContainerDied","Data":"eac936fa19ba6d8dd50417595052eea38d99e06f1a2ab4b37cc9347a4a93fc19"} Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.447355 4982 scope.go:117] "RemoveContainer" containerID="4f417dd4e55ae658952be9e4ba27cdadd79bbc44d62877a77b42a23640f293e9" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.447482 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-snh6x" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.511493 4982 scope.go:117] "RemoveContainer" containerID="c2150f58239ac372fd377b4d3bbc84d469edffb22e213778f97549f263ab4312" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.516807 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-snh6x"] Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.571884 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-snh6x"] Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.782551 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:51 crc kubenswrapper[4982]: I0123 08:17:51.839809 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" path="/var/lib/kubelet/pods/a25c1411-c0cb-4eb1-a7da-d5a6a91125c8/volumes" Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.061165 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.290411 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.441905 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.512639 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d49874b88-l4b2l"] Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.512889 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d49874b88-l4b2l" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api-log" containerID="cri-o://d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945" gracePeriod=30 Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.513337 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d49874b88-l4b2l" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api" containerID="cri-o://c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa" gracePeriod=30 Jan 23 08:17:52 crc kubenswrapper[4982]: I0123 08:17:52.996171 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:17:53 crc kubenswrapper[4982]: I0123 08:17:53.475376 4982 generic.go:334] "Generic (PLEG): container finished" podID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerID="d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945" exitCode=143 Jan 23 08:17:53 crc kubenswrapper[4982]: I0123 08:17:53.475428 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d49874b88-l4b2l" event={"ID":"a1ee7b44-eb70-4f01-9d59-fa4c4856229a","Type":"ContainerDied","Data":"d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945"} Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.492409 4982 generic.go:334] "Generic (PLEG): container finished" podID="996028f4-9ce3-4978-95f9-5a49e177c593" containerID="1668dacd607707ea67f279b1e26aa82215a52c541a768c6d1aba82d9e25f6cd3" exitCode=0 Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.492497 4982 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"996028f4-9ce3-4978-95f9-5a49e177c593","Type":"ContainerDied","Data":"1668dacd607707ea67f279b1e26aa82215a52c541a768c6d1aba82d9e25f6cd3"} Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.795750 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913549 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxhgm\" (UniqueName: \"kubernetes.io/projected/996028f4-9ce3-4978-95f9-5a49e177c593-kube-api-access-kxhgm\") pod \"996028f4-9ce3-4978-95f9-5a49e177c593\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913632 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/996028f4-9ce3-4978-95f9-5a49e177c593-etc-machine-id\") pod \"996028f4-9ce3-4978-95f9-5a49e177c593\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913672 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-scripts\") pod \"996028f4-9ce3-4978-95f9-5a49e177c593\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913707 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/996028f4-9ce3-4978-95f9-5a49e177c593-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "996028f4-9ce3-4978-95f9-5a49e177c593" (UID: "996028f4-9ce3-4978-95f9-5a49e177c593"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913773 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data\") pod \"996028f4-9ce3-4978-95f9-5a49e177c593\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913825 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data-custom\") pod \"996028f4-9ce3-4978-95f9-5a49e177c593\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.913880 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-combined-ca-bundle\") pod \"996028f4-9ce3-4978-95f9-5a49e177c593\" (UID: \"996028f4-9ce3-4978-95f9-5a49e177c593\") " Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.914325 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/996028f4-9ce3-4978-95f9-5a49e177c593-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.919989 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-scripts" (OuterVolumeSpecName: "scripts") pod "996028f4-9ce3-4978-95f9-5a49e177c593" (UID: "996028f4-9ce3-4978-95f9-5a49e177c593"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.920165 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "996028f4-9ce3-4978-95f9-5a49e177c593" (UID: "996028f4-9ce3-4978-95f9-5a49e177c593"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.929915 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996028f4-9ce3-4978-95f9-5a49e177c593-kube-api-access-kxhgm" (OuterVolumeSpecName: "kube-api-access-kxhgm") pod "996028f4-9ce3-4978-95f9-5a49e177c593" (UID: "996028f4-9ce3-4978-95f9-5a49e177c593"). InnerVolumeSpecName "kube-api-access-kxhgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:54 crc kubenswrapper[4982]: I0123 08:17:54.983873 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "996028f4-9ce3-4978-95f9-5a49e177c593" (UID: "996028f4-9ce3-4978-95f9-5a49e177c593"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.015995 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxhgm\" (UniqueName: \"kubernetes.io/projected/996028f4-9ce3-4978-95f9-5a49e177c593-kube-api-access-kxhgm\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.016032 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.016046 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.016056 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.079793 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data" (OuterVolumeSpecName: "config-data") pod "996028f4-9ce3-4978-95f9-5a49e177c593" (UID: "996028f4-9ce3-4978-95f9-5a49e177c593"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.117522 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996028f4-9ce3-4978-95f9-5a49e177c593-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.506383 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"996028f4-9ce3-4978-95f9-5a49e177c593","Type":"ContainerDied","Data":"87c86d595e598d110d86476075fe949a18fcbf2b2e06cfb5a0f3b4978759d737"} Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.506443 4982 scope.go:117] "RemoveContainer" containerID="e5ca03806bb1b5ba5583adb335bdec01b84ce903d9b6c00cb1a83855d2d2f7d6" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.506618 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.546996 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.547859 4982 scope.go:117] "RemoveContainer" containerID="1668dacd607707ea67f279b1e26aa82215a52c541a768c6d1aba82d9e25f6cd3" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.560575 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.590688 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:55 crc kubenswrapper[4982]: E0123 08:17:55.591430 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="probe" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591450 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="probe" Jan 23 08:17:55 crc kubenswrapper[4982]: E0123 08:17:55.591471 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerName="init" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591479 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerName="init" Jan 23 08:17:55 crc kubenswrapper[4982]: E0123 08:17:55.591492 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerName="dnsmasq-dns" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591498 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerName="dnsmasq-dns" Jan 23 08:17:55 crc kubenswrapper[4982]: E0123 08:17:55.591508 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="cinder-scheduler" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591513 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="cinder-scheduler" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591768 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="probe" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591784 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" containerName="cinder-scheduler" Jan 23 
08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.591811 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25c1411-c0cb-4eb1-a7da-d5a6a91125c8" containerName="dnsmasq-dns" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.593068 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.598207 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.619245 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.729032 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.729079 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.729125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.729379 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllhv\" (UniqueName: \"kubernetes.io/projected/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-kube-api-access-zllhv\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.729508 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.729778 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.795203 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d49874b88-l4b2l" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": dial tcp 10.217.0.186:9311: connect: connection refused" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.795279 4982 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-7d49874b88-l4b2l" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": dial tcp 10.217.0.186:9311: connect: connection refused" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.831807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.831876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.831894 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.831925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.831978 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zllhv\" (UniqueName: \"kubernetes.io/projected/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-kube-api-access-zllhv\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.832014 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.835043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.839610 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.843481 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-scripts\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 
crc kubenswrapper[4982]: I0123 08:17:55.844803 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.847749 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996028f4-9ce3-4978-95f9-5a49e177c593" path="/var/lib/kubelet/pods/996028f4-9ce3-4978-95f9-5a49e177c593/volumes" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.862614 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.866006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zllhv\" (UniqueName: \"kubernetes.io/projected/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-kube-api-access-zllhv\") pod \"cinder-scheduler-0\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " pod="openstack/cinder-scheduler-0" Jan 23 08:17:55 crc kubenswrapper[4982]: I0123 08:17:55.925678 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.107864 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.246764 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data\") pod \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.246949 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-combined-ca-bundle\") pod \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.246975 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgc6t\" (UniqueName: \"kubernetes.io/projected/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-kube-api-access-xgc6t\") pod \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.247114 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-logs\") pod \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.247146 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data-custom\") pod \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\" (UID: \"a1ee7b44-eb70-4f01-9d59-fa4c4856229a\") " Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.248339 4982 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-logs" (OuterVolumeSpecName: "logs") pod "a1ee7b44-eb70-4f01-9d59-fa4c4856229a" (UID: "a1ee7b44-eb70-4f01-9d59-fa4c4856229a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.250946 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.254777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a1ee7b44-eb70-4f01-9d59-fa4c4856229a" (UID: "a1ee7b44-eb70-4f01-9d59-fa4c4856229a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.255072 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-kube-api-access-xgc6t" (OuterVolumeSpecName: "kube-api-access-xgc6t") pod "a1ee7b44-eb70-4f01-9d59-fa4c4856229a" (UID: "a1ee7b44-eb70-4f01-9d59-fa4c4856229a"). InnerVolumeSpecName "kube-api-access-xgc6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.293242 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1ee7b44-eb70-4f01-9d59-fa4c4856229a" (UID: "a1ee7b44-eb70-4f01-9d59-fa4c4856229a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.335503 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data" (OuterVolumeSpecName: "config-data") pod "a1ee7b44-eb70-4f01-9d59-fa4c4856229a" (UID: "a1ee7b44-eb70-4f01-9d59-fa4c4856229a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.352922 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.352966 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.352979 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgc6t\" (UniqueName: \"kubernetes.io/projected/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-kube-api-access-xgc6t\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.352991 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1ee7b44-eb70-4f01-9d59-fa4c4856229a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.523073 4982 generic.go:334] "Generic (PLEG): container finished" podID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerID="c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa" exitCode=0 Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.523136 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d49874b88-l4b2l" event={"ID":"a1ee7b44-eb70-4f01-9d59-fa4c4856229a","Type":"ContainerDied","Data":"c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa"} Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.523162 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d49874b88-l4b2l" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.523195 4982 scope.go:117] "RemoveContainer" containerID="c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.523179 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d49874b88-l4b2l" event={"ID":"a1ee7b44-eb70-4f01-9d59-fa4c4856229a","Type":"ContainerDied","Data":"c5aa4a5ea69e49e3183e1a75e0efb573b34cd923cdc7729fc862e990338eaad2"} Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.532024 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.675019 4982 scope.go:117] "RemoveContainer" containerID="d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.681049 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d49874b88-l4b2l"] Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.691530 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7d49874b88-l4b2l"] Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.718023 4982 scope.go:117] "RemoveContainer" containerID="c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa" Jan 23 08:17:56 crc kubenswrapper[4982]: E0123 08:17:56.718434 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa\": container with ID starting with c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa not found: ID does not exist" containerID="c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.718473 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa"} err="failed to get container status \"c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa\": rpc error: code = NotFound desc = could not find container \"c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa\": container with ID starting with c4f40f4338d57884b45da8000913ad999106cbd2b286a2473f1b7c84001ccafa not found: ID does not exist" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.718501 4982 scope.go:117] "RemoveContainer" containerID="d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945" Jan 23 08:17:56 crc kubenswrapper[4982]: E0123 08:17:56.718952 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945\": container with ID starting with d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945 not found: ID does not exist" containerID="d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.719021 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945"} err="failed to get container status \"d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945\": rpc error: code = NotFound desc = could not find container 
\"d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945\": container with ID starting with d8c7aff9965134df2e8fa1f11362d8065b94933e346c69ffad094ae43d3a8945 not found: ID does not exist" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.979104 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-696995d9b9-k2q7r"] Jan 23 08:17:56 crc kubenswrapper[4982]: E0123 08:17:56.979718 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.979741 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api" Jan 23 08:17:56 crc kubenswrapper[4982]: E0123 08:17:56.979786 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api-log" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.979795 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api-log" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.980063 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.980090 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" containerName="barbican-api-log" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.981510 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.985663 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.986614 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.987458 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 08:17:56 crc kubenswrapper[4982]: I0123 08:17:56.988586 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-696995d9b9-k2q7r"] Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.074568 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.074647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4zl\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-kube-api-access-7z4zl\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.074670 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-internal-tls-certs\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: 
\"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.074691 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-public-tls-certs\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.074736 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-combined-ca-bundle\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.074964 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-config-data\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.075006 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-run-httpd\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.075121 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-log-httpd\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181205 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181277 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4zl\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-kube-api-access-7z4zl\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181305 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-internal-tls-certs\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181327 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-public-tls-certs\") pod 
\"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181360 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-combined-ca-bundle\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181450 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-config-data\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181475 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-run-httpd\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.181520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-log-httpd\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.183092 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-log-httpd\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.184204 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-run-httpd\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.197784 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-combined-ca-bundle\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.197877 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.198093 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-config-data\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:17:57 crc 
kubenswrapper[4982]: I0123 08:17:57.201068 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-public-tls-certs\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r"
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.201086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-internal-tls-certs\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r"
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.205397 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4zl\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-kube-api-access-7z4zl\") pod \"swift-proxy-696995d9b9-k2q7r\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " pod="openstack/swift-proxy-696995d9b9-k2q7r"
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.329410 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-696995d9b9-k2q7r"
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.552513 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d81a8b1-9b52-44df-bc0b-7852d3b8570c","Type":"ContainerStarted","Data":"d59b5dc7ed6c69bba6c7d620f3e44ff6159fc8950b7d39dcc371af91ae5c4782"}
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.552879 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d81a8b1-9b52-44df-bc0b-7852d3b8570c","Type":"ContainerStarted","Data":"5756f207c29b9bc1df2bf875ecdca6a0bee6d380afe7d063c01461743a4802d6"}
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.723774 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.725057 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
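The reconciler_common.go and operation_generator.go entries above trace the kubelet volume manager's fixed progression for each volume of a newly admitted pod: "VerifyControllerAttachedVolume started" (reconciler_common.go:245), then "MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637), after which sandbox creation proceeds. A minimal Go sketch of that three-state reconciliation loop; the types and function names below are invented for illustration and are not kubelet's actual API:

    package main

    import "fmt"

    type volumeState int

    const (
        stateUnverified volumeState = iota // not yet checked against the API server
        stateAttached                      // VerifyControllerAttachedVolume succeeded
        stateMounted                       // MountVolume.SetUp succeeded
    )

    type volume struct {
        name  string
        state volumeState
    }

    // reconcile advances every volume one step toward "mounted", the way the
    // kubelet reconciler repeatedly nudges actual state toward desired state.
    func reconcile(pod string, vols []volume) {
        for i := range vols {
            switch v := &vols[i]; v.state {
            case stateUnverified:
                fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod=%q\n", v.name, pod)
                v.state = stateAttached
            case stateAttached:
                fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v.name, pod)
                v.state = stateMounted
            }
        }
    }

    func main() {
        vols := []volume{{name: "openstack-config"}, {name: "kube-api-access-dxzb5"}}
        for pass := 0; pass < 2; pass++ { // first pass verifies, second mounts
            reconcile("openstack/openstackclient", vols)
        }
    }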
Need to start a new one" pod="openstack/openstackclient" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.728092 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.728405 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.728583 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9lr6z" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.754698 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.842440 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ee7b44-eb70-4f01-9d59-fa4c4856229a" path="/var/lib/kubelet/pods/a1ee7b44-eb70-4f01-9d59-fa4c4856229a/volumes" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.902209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.902278 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config-secret\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.902402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:57 crc kubenswrapper[4982]: I0123 08:17:57.902433 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-kube-api-access-dxzb5\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.002898 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-696995d9b9-k2q7r"] Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.004215 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.004298 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config-secret\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.004454 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.004501 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-kube-api-access-dxzb5\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.007138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.025120 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config-secret\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.025570 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.034425 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-kube-api-access-dxzb5\") pod \"openstackclient\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.068612 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.611886 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d81a8b1-9b52-44df-bc0b-7852d3b8570c","Type":"ContainerStarted","Data":"2fdf936f82b58284aa5deb1d6981c7ab39aeaacc42c4318726f6a5011a3f326d"} Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.614676 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.619092 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-696995d9b9-k2q7r" event={"ID":"740c5af1-ac3c-431a-ab6e-c5aecf933fea","Type":"ContainerStarted","Data":"f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f"} Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.619139 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-696995d9b9-k2q7r" event={"ID":"740c5af1-ac3c-431a-ab6e-c5aecf933fea","Type":"ContainerStarted","Data":"e54958a4fd80d213360020d01d0a880a1d40f76fc6606c7f4bd348f4c29a1ca6"} Jan 23 08:17:58 crc kubenswrapper[4982]: I0123 08:17:58.696968 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.696942584 podStartE2EDuration="3.696942584s" podCreationTimestamp="2026-01-23 08:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:58.651303786 +0000 UTC m=+1353.131667192" watchObservedRunningTime="2026-01-23 08:17:58.696942584 +0000 UTC m=+1353.177305970" Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.199000 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.291497 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.291859 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-central-agent" containerID="cri-o://a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e" gracePeriod=30 Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.292779 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="sg-core" containerID="cri-o://6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412" gracePeriod=30 Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.292779 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="proxy-httpd" containerID="cri-o://e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a" gracePeriod=30 Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.292864 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-notification-agent" containerID="cri-o://b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f" gracePeriod=30 Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.300404 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.638233 4982 generic.go:334] "Generic (PLEG): container finished" podID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerID="e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a" exitCode=0 Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.638566 4982 generic.go:334] "Generic (PLEG): container finished" podID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerID="6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412" exitCode=2 Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.638265 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerDied","Data":"e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a"} Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.638650 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerDied","Data":"6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412"} Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.641232 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0","Type":"ContainerStarted","Data":"51001371d5909f68f8d1b9289504828f14ff4e8f34b95fd96742f07f05b26f22"} Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.644177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-696995d9b9-k2q7r" event={"ID":"740c5af1-ac3c-431a-ab6e-c5aecf933fea","Type":"ContainerStarted","Data":"2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54"} Jan 23 08:17:59 crc kubenswrapper[4982]: I0123 08:17:59.672758 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-696995d9b9-k2q7r" podStartSLOduration=3.672729418 podStartE2EDuration="3.672729418s" podCreationTimestamp="2026-01-23 08:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:17:59.666616355 +0000 UTC m=+1354.146979751" watchObservedRunningTime="2026-01-23 08:17:59.672729418 +0000 UTC m=+1354.153092804" Jan 23 08:18:00 crc kubenswrapper[4982]: I0123 08:18:00.655919 4982 generic.go:334] "Generic (PLEG): container finished" podID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerID="a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e" exitCode=0 Jan 23 08:18:00 crc kubenswrapper[4982]: I0123 08:18:00.656008 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerDied","Data":"a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e"} Jan 23 08:18:00 crc kubenswrapper[4982]: I0123 08:18:00.656431 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:18:00 crc kubenswrapper[4982]: I0123 08:18:00.656452 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:18:00 crc kubenswrapper[4982]: I0123 08:18:00.926825 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.230033 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.340774 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-run-httpd\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.341169 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcjg4\" (UniqueName: \"kubernetes.io/projected/05941603-ac04-4d11-bf57-a2ad73b9b5bc-kube-api-access-rcjg4\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.341201 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-log-httpd\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.341274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-scripts\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.341329 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-sg-core-conf-yaml\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.341345 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-combined-ca-bundle\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.341446 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-config-data\") pod \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\" (UID: \"05941603-ac04-4d11-bf57-a2ad73b9b5bc\") " Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.343810 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.343843 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.363019 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-scripts" (OuterVolumeSpecName: "scripts") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.365813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05941603-ac04-4d11-bf57-a2ad73b9b5bc-kube-api-access-rcjg4" (OuterVolumeSpecName: "kube-api-access-rcjg4") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "kube-api-access-rcjg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.387265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.439415 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.444225 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.444255 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.444264 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.444274 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcjg4\" (UniqueName: \"kubernetes.io/projected/05941603-ac04-4d11-bf57-a2ad73b9b5bc-kube-api-access-rcjg4\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.444285 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/05941603-ac04-4d11-bf57-a2ad73b9b5bc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.444294 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.459355 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-config-data" (OuterVolumeSpecName: "config-data") pod "05941603-ac04-4d11-bf57-a2ad73b9b5bc" (UID: "05941603-ac04-4d11-bf57-a2ad73b9b5bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.545923 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05941603-ac04-4d11-bf57-a2ad73b9b5bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.550991 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.702719 4982 generic.go:334] "Generic (PLEG): container finished" podID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerID="b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f" exitCode=0 Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.702775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerDied","Data":"b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f"} Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.702810 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"05941603-ac04-4d11-bf57-a2ad73b9b5bc","Type":"ContainerDied","Data":"2617f0b350f534f8fbc2be99af4d9786845379874798a710f17ba71f36408618"} Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.702835 4982 scope.go:117] "RemoveContainer" containerID="e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.703044 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.752176 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.771009 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.788255 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:03 crc kubenswrapper[4982]: E0123 08:18:03.795803 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-central-agent" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.795846 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-central-agent" Jan 23 08:18:03 crc kubenswrapper[4982]: E0123 08:18:03.795868 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="proxy-httpd" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.795876 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="proxy-httpd" Jan 23 08:18:03 crc kubenswrapper[4982]: E0123 08:18:03.795917 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="sg-core" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.795924 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="sg-core" Jan 23 08:18:03 crc kubenswrapper[4982]: E0123 08:18:03.795942 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-notification-agent" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.795949 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-notification-agent" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.796135 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-notification-agent" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.796170 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="sg-core" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.796179 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="proxy-httpd" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.796196 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" containerName="ceilometer-central-agent" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.798984 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.808642 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.810285 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.815840 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.845164 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05941603-ac04-4d11-bf57-a2ad73b9b5bc" path="/var/lib/kubelet/pods/05941603-ac04-4d11-bf57-a2ad73b9b5bc/volumes" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.958287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.958855 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.958905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-log-httpd\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.958959 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-scripts\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.959083 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-run-httpd\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.959109 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6fq4\" (UniqueName: \"kubernetes.io/projected/7eb1d299-9c2c-429a-b19d-988f037af7fb-kube-api-access-s6fq4\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:03 crc kubenswrapper[4982]: I0123 08:18:03.959223 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-config-data\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.061933 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-config-data\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062077 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062127 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062166 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-log-httpd\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062226 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-scripts\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062312 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-run-httpd\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6fq4\" (UniqueName: \"kubernetes.io/projected/7eb1d299-9c2c-429a-b19d-988f037af7fb-kube-api-access-s6fq4\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062885 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-run-httpd\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.062941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-log-httpd\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.068053 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.068651 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-config-data\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.072041 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.072478 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-scripts\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.081129 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6fq4\" (UniqueName: \"kubernetes.io/projected/7eb1d299-9c2c-429a-b19d-988f037af7fb-kube-api-access-s6fq4\") pod \"ceilometer-0\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " pod="openstack/ceilometer-0" Jan 23 08:18:04 crc kubenswrapper[4982]: I0123 08:18:04.178720 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:06 crc kubenswrapper[4982]: I0123 08:18:06.192488 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 08:18:06 crc kubenswrapper[4982]: I0123 08:18:06.504780 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:18:06 crc kubenswrapper[4982]: I0123 08:18:06.574433 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76d579d464-xmhp9"] Jan 23 08:18:06 crc kubenswrapper[4982]: I0123 08:18:06.574787 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76d579d464-xmhp9" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-api" containerID="cri-o://a9cb9e8b1eabd25f0f4033246745d7188d6551abcf1f806565c3d8557778a2d4" gracePeriod=30 Jan 23 08:18:06 crc kubenswrapper[4982]: I0123 08:18:06.574857 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76d579d464-xmhp9" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-httpd" containerID="cri-o://b0dc2778be8e3e91a2775231145b32b7176df87ebc194db92e4d240e2fd396e1" gracePeriod=30 Jan 23 08:18:07 crc kubenswrapper[4982]: I0123 08:18:07.338127 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:18:07 crc kubenswrapper[4982]: I0123 08:18:07.339900 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:18:07 crc kubenswrapper[4982]: I0123 08:18:07.741541 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:07 crc kubenswrapper[4982]: I0123 08:18:07.765561 4982 generic.go:334] "Generic (PLEG): container finished" podID="b5234ba0-f780-4929-b3b1-8db96026770c" containerID="b0dc2778be8e3e91a2775231145b32b7176df87ebc194db92e4d240e2fd396e1" exitCode=0 Jan 23 08:18:07 crc kubenswrapper[4982]: I0123 08:18:07.768164 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76d579d464-xmhp9" 
event={"ID":"b5234ba0-f780-4929-b3b1-8db96026770c","Type":"ContainerDied","Data":"b0dc2778be8e3e91a2775231145b32b7176df87ebc194db92e4d240e2fd396e1"} Jan 23 08:18:08 crc kubenswrapper[4982]: I0123 08:18:08.787332 4982 generic.go:334] "Generic (PLEG): container finished" podID="b5234ba0-f780-4929-b3b1-8db96026770c" containerID="a9cb9e8b1eabd25f0f4033246745d7188d6551abcf1f806565c3d8557778a2d4" exitCode=0 Jan 23 08:18:08 crc kubenswrapper[4982]: I0123 08:18:08.787669 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76d579d464-xmhp9" event={"ID":"b5234ba0-f780-4929-b3b1-8db96026770c","Type":"ContainerDied","Data":"a9cb9e8b1eabd25f0f4033246745d7188d6551abcf1f806565c3d8557778a2d4"} Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.642855 4982 scope.go:117] "RemoveContainer" containerID="6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.705554 4982 scope.go:117] "RemoveContainer" containerID="b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.776928 4982 scope.go:117] "RemoveContainer" containerID="a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.910432 4982 scope.go:117] "RemoveContainer" containerID="e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a" Jan 23 08:18:09 crc kubenswrapper[4982]: E0123 08:18:09.914880 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a\": container with ID starting with e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a not found: ID does not exist" containerID="e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.914967 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a"} err="failed to get container status \"e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a\": rpc error: code = NotFound desc = could not find container \"e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a\": container with ID starting with e0d8d6fbbc65b9c8117899c18f7f03b8f7d6d2ee41b060a0a0e623dacbbdd31a not found: ID does not exist" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.915002 4982 scope.go:117] "RemoveContainer" containerID="6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412" Jan 23 08:18:09 crc kubenswrapper[4982]: E0123 08:18:09.917119 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412\": container with ID starting with 6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412 not found: ID does not exist" containerID="6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.917188 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412"} err="failed to get container status \"6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412\": rpc error: code = NotFound desc = could not find container 
\"6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412\": container with ID starting with 6042cc32c08cd1fc66fc3f6e7677d732b985978e0d8511101b13e476ac13a412 not found: ID does not exist" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.917237 4982 scope.go:117] "RemoveContainer" containerID="b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f" Jan 23 08:18:09 crc kubenswrapper[4982]: E0123 08:18:09.920284 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f\": container with ID starting with b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f not found: ID does not exist" containerID="b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.920883 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f"} err="failed to get container status \"b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f\": rpc error: code = NotFound desc = could not find container \"b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f\": container with ID starting with b2922e20afded5cdb9021797d96b34dfec5b871dc27dabcee0ea9c4d208a1a3f not found: ID does not exist" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.921881 4982 scope.go:117] "RemoveContainer" containerID="a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e" Jan 23 08:18:09 crc kubenswrapper[4982]: E0123 08:18:09.922410 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e\": container with ID starting with a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e not found: ID does not exist" containerID="a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.922438 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e"} err="failed to get container status \"a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e\": rpc error: code = NotFound desc = could not find container \"a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e\": container with ID starting with a3fc513c882cecc2ad8c82d41b7073c84e458e04049f51247852c50dd54fc82e not found: ID does not exist" Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.984500 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.984741 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-log" containerID="cri-o://5f49e46ef3f184cffc0a3f5151239a6c85e934892cf0f9301b74cc75f12bd907" gracePeriod=30 Jan 23 08:18:09 crc kubenswrapper[4982]: I0123 08:18:09.984868 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-httpd" containerID="cri-o://fa2e7ff99f086c9757ad235e8999d715b4462e740d7cca574f91d905931f959d" 
gracePeriod=30 Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.063459 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76d579d464-xmhp9" Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.209042 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-combined-ca-bundle\") pod \"b5234ba0-f780-4929-b3b1-8db96026770c\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.209480 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-httpd-config\") pod \"b5234ba0-f780-4929-b3b1-8db96026770c\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.209533 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwtf5\" (UniqueName: \"kubernetes.io/projected/b5234ba0-f780-4929-b3b1-8db96026770c-kube-api-access-kwtf5\") pod \"b5234ba0-f780-4929-b3b1-8db96026770c\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.209579 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-config\") pod \"b5234ba0-f780-4929-b3b1-8db96026770c\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.209644 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-ovndb-tls-certs\") pod \"b5234ba0-f780-4929-b3b1-8db96026770c\" (UID: \"b5234ba0-f780-4929-b3b1-8db96026770c\") " Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.218865 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5234ba0-f780-4929-b3b1-8db96026770c-kube-api-access-kwtf5" (OuterVolumeSpecName: "kube-api-access-kwtf5") pod "b5234ba0-f780-4929-b3b1-8db96026770c" (UID: "b5234ba0-f780-4929-b3b1-8db96026770c"). InnerVolumeSpecName "kube-api-access-kwtf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.222729 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b5234ba0-f780-4929-b3b1-8db96026770c" (UID: "b5234ba0-f780-4929-b3b1-8db96026770c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.283113 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-config" (OuterVolumeSpecName: "config") pod "b5234ba0-f780-4929-b3b1-8db96026770c" (UID: "b5234ba0-f780-4929-b3b1-8db96026770c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.287490 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5234ba0-f780-4929-b3b1-8db96026770c" (UID: "b5234ba0-f780-4929-b3b1-8db96026770c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.311656 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.311693 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.311703 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwtf5\" (UniqueName: \"kubernetes.io/projected/b5234ba0-f780-4929-b3b1-8db96026770c-kube-api-access-kwtf5\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.311714 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-config\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.322747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b5234ba0-f780-4929-b3b1-8db96026770c" (UID: "b5234ba0-f780-4929-b3b1-8db96026770c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.411264 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.413748 4982 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5234ba0-f780-4929-b3b1-8db96026770c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:10 crc kubenswrapper[4982]: W0123 08:18:10.420829 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eb1d299_9c2c_429a_b19d_988f037af7fb.slice/crio-7ef4bbab4a6e91b1f9e026e81f96851c057904ca228f13e9dc1f378111b16d60 WatchSource:0}: Error finding container 7ef4bbab4a6e91b1f9e026e81f96851c057904ca228f13e9dc1f378111b16d60: Status 404 returned error can't find the container with id 7ef4bbab4a6e91b1f9e026e81f96851c057904ca228f13e9dc1f378111b16d60
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.424499 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.836426 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerStarted","Data":"7ef4bbab4a6e91b1f9e026e81f96851c057904ca228f13e9dc1f378111b16d60"}
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.838807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0","Type":"ContainerStarted","Data":"5681b28d1051a5f8acd2929e33682deb65b19b740cd2b6c2b2d8317baaa49110"}
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.841142 4982 generic.go:334] "Generic (PLEG): container finished" podID="305a601b-3073-40b7-86d8-17b02bffba9e" containerID="5f49e46ef3f184cffc0a3f5151239a6c85e934892cf0f9301b74cc75f12bd907" exitCode=143
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.841235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"305a601b-3073-40b7-86d8-17b02bffba9e","Type":"ContainerDied","Data":"5f49e46ef3f184cffc0a3f5151239a6c85e934892cf0f9301b74cc75f12bd907"}
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.843345 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76d579d464-xmhp9"
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.843344 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76d579d464-xmhp9" event={"ID":"b5234ba0-f780-4929-b3b1-8db96026770c","Type":"ContainerDied","Data":"3c8b2192eb6ed4b280a7e8020c44a4474f21b9aebcadf002fb513144d1b30e24"}
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.844264 4982 scope.go:117] "RemoveContainer" containerID="b0dc2778be8e3e91a2775231145b32b7176df87ebc194db92e4d240e2fd396e1"
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.866310 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.737514987 podStartE2EDuration="13.866291964s" podCreationTimestamp="2026-01-23 08:17:57 +0000 UTC" firstStartedPulling="2026-01-23 08:17:58.65446915 +0000 UTC m=+1353.134832536" lastFinishedPulling="2026-01-23 08:18:09.783246127 +0000 UTC m=+1364.263609513" observedRunningTime="2026-01-23 08:18:10.859817681 +0000 UTC m=+1365.340181067" watchObservedRunningTime="2026-01-23 08:18:10.866291964 +0000 UTC m=+1365.346655350"
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.876759 4982 scope.go:117] "RemoveContainer" containerID="a9cb9e8b1eabd25f0f4033246745d7188d6551abcf1f806565c3d8557778a2d4"
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.897041 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76d579d464-xmhp9"]
Jan 23 08:18:10 crc kubenswrapper[4982]: I0123 08:18:10.909173 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-76d579d464-xmhp9"]
Jan 23 08:18:11 crc kubenswrapper[4982]: I0123 08:18:11.436233 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:18:11 crc kubenswrapper[4982]: I0123 08:18:11.436310 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:18:11 crc kubenswrapper[4982]: I0123 08:18:11.842362 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" path="/var/lib/kubelet/pods/b5234ba0-f780-4929-b3b1-8db96026770c/volumes"
Jan 23 08:18:11 crc kubenswrapper[4982]: I0123 08:18:11.863550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerStarted","Data":"9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a"}
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.746441 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jlbt"]
Jan 23 08:18:12 crc kubenswrapper[4982]: E0123 08:18:12.747455 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-httpd"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.747475 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-httpd"
Jan 23 08:18:12 crc kubenswrapper[4982]: E0123 08:18:12.747521 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-api"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.747529 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-api"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.747857 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-api"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.747882 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5234ba0-f780-4929-b3b1-8db96026770c" containerName="neutron-httpd"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.753382 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.759556 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jlbt"]
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.800763 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-catalog-content\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.800848 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-utilities\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.800912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45w4\" (UniqueName: \"kubernetes.io/projected/b12bf1c3-1f7c-4ac1-9af2-748091012432-kube-api-access-c45w4\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.882933 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerStarted","Data":"cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e"}
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.904031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-catalog-content\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.904113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-utilities\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.904168 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45w4\" (UniqueName: \"kubernetes.io/projected/b12bf1c3-1f7c-4ac1-9af2-748091012432-kube-api-access-c45w4\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.905040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-catalog-content\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.905315 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-utilities\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:12 crc kubenswrapper[4982]: I0123 08:18:12.936364 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c45w4\" (UniqueName: \"kubernetes.io/projected/b12bf1c3-1f7c-4ac1-9af2-748091012432-kube-api-access-c45w4\") pod \"redhat-operators-7jlbt\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.125846 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jlbt"
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.748049 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jlbt"]
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.914395 4982 generic.go:334] "Generic (PLEG): container finished" podID="305a601b-3073-40b7-86d8-17b02bffba9e" containerID="fa2e7ff99f086c9757ad235e8999d715b4462e740d7cca574f91d905931f959d" exitCode=0
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.914500 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"305a601b-3073-40b7-86d8-17b02bffba9e","Type":"ContainerDied","Data":"fa2e7ff99f086c9757ad235e8999d715b4462e740d7cca574f91d905931f959d"}
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.914530 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"305a601b-3073-40b7-86d8-17b02bffba9e","Type":"ContainerDied","Data":"e0078e76c9632908bc91ce1e541d7cb54698df32e9cda56c6cf131a9aafa19a5"}
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.914539 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0078e76c9632908bc91ce1e541d7cb54698df32e9cda56c6cf131a9aafa19a5"
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.917435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerStarted","Data":"c18b11202992711cbabe99c7eaba90eb87df1907f8378500edf9e230513fc6b5"}
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.922504 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerStarted","Data":"ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5"}
Jan 23 08:18:13 crc kubenswrapper[4982]: I0123 08:18:13.943546 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.039361 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-config-data\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.039919 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-combined-ca-bundle\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.040009 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.040053 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-internal-tls-certs\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.040234 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-logs\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.040269 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-httpd-run\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.040447 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-scripts\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.040508 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjgz\" (UniqueName: \"kubernetes.io/projected/305a601b-3073-40b7-86d8-17b02bffba9e-kube-api-access-kmjgz\") pod \"305a601b-3073-40b7-86d8-17b02bffba9e\" (UID: \"305a601b-3073-40b7-86d8-17b02bffba9e\") "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.042155 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-logs" (OuterVolumeSpecName: "logs") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.042347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.051158 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305a601b-3073-40b7-86d8-17b02bffba9e-kube-api-access-kmjgz" (OuterVolumeSpecName: "kube-api-access-kmjgz") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "kube-api-access-kmjgz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.057913 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.064372 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-scripts" (OuterVolumeSpecName: "scripts") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.098106 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.144056 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-logs\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.144101 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/305a601b-3073-40b7-86d8-17b02bffba9e-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.144113 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.144124 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmjgz\" (UniqueName: \"kubernetes.io/projected/305a601b-3073-40b7-86d8-17b02bffba9e-kube-api-access-kmjgz\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.144136 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.144164 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.190950 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.230846 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.242395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-config-data" (OuterVolumeSpecName: "config-data") pod "305a601b-3073-40b7-86d8-17b02bffba9e" (UID: "305a601b-3073-40b7-86d8-17b02bffba9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.247790 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.247824 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.247837 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305a601b-3073-40b7-86d8-17b02bffba9e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.369280 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8scdh"]
Jan 23 08:18:14 crc kubenswrapper[4982]: E0123 08:18:14.369840 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-log"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.369858 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-log"
Jan 23 08:18:14 crc kubenswrapper[4982]: E0123 08:18:14.369899 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-httpd"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.369906 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-httpd"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.370091 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-log"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.370123 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" containerName="glance-httpd"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.370842 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.387337 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8scdh"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.452439 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7300e5-e28d-4690-b05f-1cfb0c034c71-operator-scripts\") pod \"nova-api-db-create-8scdh\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.452514 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7r9l\" (UniqueName: \"kubernetes.io/projected/9b7300e5-e28d-4690-b05f-1cfb0c034c71-kube-api-access-c7r9l\") pod \"nova-api-db-create-8scdh\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.478914 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kfcmb"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.480506 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.501717 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6759-account-create-update-rmsm9"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.503143 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.507240 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.518349 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kfcmb"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.531589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6759-account-create-update-rmsm9"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.561806 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7300e5-e28d-4690-b05f-1cfb0c034c71-operator-scripts\") pod \"nova-api-db-create-8scdh\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.561891 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7r9l\" (UniqueName: \"kubernetes.io/projected/9b7300e5-e28d-4690-b05f-1cfb0c034c71-kube-api-access-c7r9l\") pod \"nova-api-db-create-8scdh\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.600526 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7300e5-e28d-4690-b05f-1cfb0c034c71-operator-scripts\") pod \"nova-api-db-create-8scdh\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.635501 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xtlkt"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.641667 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.655535 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xtlkt"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.666722 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpd4\" (UniqueName: \"kubernetes.io/projected/e338b64b-08c9-4c05-b50d-4d476391ef43-kube-api-access-sdpd4\") pod \"nova-cell0-db-create-kfcmb\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.667710 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbjs\" (UniqueName: \"kubernetes.io/projected/b8e66ea6-0018-46c2-9c01-70f84bc2f166-kube-api-access-vgbjs\") pod \"nova-api-6759-account-create-update-rmsm9\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.667850 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e66ea6-0018-46c2-9c01-70f84bc2f166-operator-scripts\") pod \"nova-api-6759-account-create-update-rmsm9\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.671755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e338b64b-08c9-4c05-b50d-4d476391ef43-operator-scripts\") pod \"nova-cell0-db-create-kfcmb\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.678862 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7r9l\" (UniqueName: \"kubernetes.io/projected/9b7300e5-e28d-4690-b05f-1cfb0c034c71-kube-api-access-c7r9l\") pod \"nova-api-db-create-8scdh\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.687667 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b9da-account-create-update-f8dpr"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.688961 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.694815 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.704346 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8scdh"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.744807 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b9da-account-create-update-f8dpr"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.783689 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e338b64b-08c9-4c05-b50d-4d476391ef43-operator-scripts\") pod \"nova-cell0-db-create-kfcmb\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.783772 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b061f3a-644c-426d-8c3f-d53f664a2703-operator-scripts\") pod \"nova-cell1-db-create-xtlkt\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.783835 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdpd4\" (UniqueName: \"kubernetes.io/projected/e338b64b-08c9-4c05-b50d-4d476391ef43-kube-api-access-sdpd4\") pod \"nova-cell0-db-create-kfcmb\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.784105 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbjs\" (UniqueName: \"kubernetes.io/projected/b8e66ea6-0018-46c2-9c01-70f84bc2f166-kube-api-access-vgbjs\") pod \"nova-api-6759-account-create-update-rmsm9\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.784150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e66ea6-0018-46c2-9c01-70f84bc2f166-operator-scripts\") pod \"nova-api-6759-account-create-update-rmsm9\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.784209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q7kz\" (UniqueName: \"kubernetes.io/projected/9b061f3a-644c-426d-8c3f-d53f664a2703-kube-api-access-5q7kz\") pod \"nova-cell1-db-create-xtlkt\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.786854 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e66ea6-0018-46c2-9c01-70f84bc2f166-operator-scripts\") pod \"nova-api-6759-account-create-update-rmsm9\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.787094 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e338b64b-08c9-4c05-b50d-4d476391ef43-operator-scripts\") pod \"nova-cell0-db-create-kfcmb\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.810236 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgbjs\" (UniqueName: \"kubernetes.io/projected/b8e66ea6-0018-46c2-9c01-70f84bc2f166-kube-api-access-vgbjs\") pod \"nova-api-6759-account-create-update-rmsm9\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.811933 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdpd4\" (UniqueName: \"kubernetes.io/projected/e338b64b-08c9-4c05-b50d-4d476391ef43-kube-api-access-sdpd4\") pod \"nova-cell0-db-create-kfcmb\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.823358 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6759-account-create-update-rmsm9"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.888176 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5pqx\" (UniqueName: \"kubernetes.io/projected/77a06e0c-0060-42b6-a09c-a805ea7b2878-kube-api-access-m5pqx\") pod \"nova-cell0-b9da-account-create-update-f8dpr\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.888225 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77a06e0c-0060-42b6-a09c-a805ea7b2878-operator-scripts\") pod \"nova-cell0-b9da-account-create-update-f8dpr\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.888316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q7kz\" (UniqueName: \"kubernetes.io/projected/9b061f3a-644c-426d-8c3f-d53f664a2703-kube-api-access-5q7kz\") pod \"nova-cell1-db-create-xtlkt\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.888359 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b061f3a-644c-426d-8c3f-d53f664a2703-operator-scripts\") pod \"nova-cell1-db-create-xtlkt\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.889099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b061f3a-644c-426d-8c3f-d53f664a2703-operator-scripts\") pod \"nova-cell1-db-create-xtlkt\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.901673 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-83ff-account-create-update-9nrqn"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.903102 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.908210 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.910924 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q7kz\" (UniqueName: \"kubernetes.io/projected/9b061f3a-644c-426d-8c3f-d53f664a2703-kube-api-access-5q7kz\") pod \"nova-cell1-db-create-xtlkt\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.912146 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83ff-account-create-update-9nrqn"]
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.947854 4982 generic.go:334] "Generic (PLEG): container finished" podID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerID="2d66b559ce3dd498f332e2950e63d2b4e69d1a2f733bc6a21bb0158af3a2813b" exitCode=0
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.948005 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.948868 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerDied","Data":"2d66b559ce3dd498f332e2950e63d2b4e69d1a2f733bc6a21bb0158af3a2813b"}
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.991297 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5pqx\" (UniqueName: \"kubernetes.io/projected/77a06e0c-0060-42b6-a09c-a805ea7b2878-kube-api-access-m5pqx\") pod \"nova-cell0-b9da-account-create-update-f8dpr\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.991704 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77a06e0c-0060-42b6-a09c-a805ea7b2878-operator-scripts\") pod \"nova-cell0-b9da-account-create-update-f8dpr\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:14 crc kubenswrapper[4982]: I0123 08:18:14.994791 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77a06e0c-0060-42b6-a09c-a805ea7b2878-operator-scripts\") pod \"nova-cell0-b9da-account-create-update-f8dpr\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.026340 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5pqx\" (UniqueName: \"kubernetes.io/projected/77a06e0c-0060-42b6-a09c-a805ea7b2878-kube-api-access-m5pqx\") pod \"nova-cell0-b9da-account-create-update-f8dpr\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.045711 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.063982 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xtlkt"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.068357 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.092728 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.095305 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pz8h\" (UniqueName: \"kubernetes.io/projected/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-kube-api-access-7pz8h\") pod \"nova-cell1-83ff-account-create-update-9nrqn\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.095363 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-operator-scripts\") pod \"nova-cell1-83ff-account-create-update-9nrqn\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.101463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.101679 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.120096 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfcmb"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.120894 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.126397 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.126687 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.198515 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pz8h\" (UniqueName: \"kubernetes.io/projected/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-kube-api-access-7pz8h\") pod \"nova-cell1-83ff-account-create-update-9nrqn\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.198562 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-operator-scripts\") pod \"nova-cell1-83ff-account-create-update-9nrqn\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.220916 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-operator-scripts\") pod \"nova-cell1-83ff-account-create-update-9nrqn\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.237165 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pz8h\" (UniqueName: \"kubernetes.io/projected/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-kube-api-access-7pz8h\") pod \"nova-cell1-83ff-account-create-update-9nrqn\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.247209 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqxtw\" (UniqueName: \"kubernetes.io/projected/497bf205-5985-411f-960e-2364563ae9e1-kube-api-access-lqxtw\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302560 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302652 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302711 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302761 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302788 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.302852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.312781 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414506 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414573 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414609 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414639 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414685 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414706 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414771 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxtw\" (UniqueName: \"kubernetes.io/projected/497bf205-5985-411f-960e-2364563ae9e1-kube-api-access-lqxtw\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.414803 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.418395 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.418863 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.419451 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-logs\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.422030 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.422752 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.426617 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.430417 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.446359 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxtw\" (UniqueName: \"kubernetes.io/projected/497bf205-5985-411f-960e-2364563ae9e1-kube-api-access-lqxtw\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.544968 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.587857 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8scdh"]
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.670187 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.764655 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6759-account-create-update-rmsm9"]
Jan 23 08:18:15 crc kubenswrapper[4982]: W0123 08:18:15.776745 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8e66ea6_0018_46c2_9c01_70f84bc2f166.slice/crio-e9952969a6b4b208b9ed22d1c44c2bc462f28e6eff8424d9ca647321f8a239b1 WatchSource:0}: Error finding container e9952969a6b4b208b9ed22d1c44c2bc462f28e6eff8424d9ca647321f8a239b1: Status 404 returned error can't find the container with id e9952969a6b4b208b9ed22d1c44c2bc462f28e6eff8424d9ca647321f8a239b1
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.862349 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305a601b-3073-40b7-86d8-17b02bffba9e" path="/var/lib/kubelet/pods/305a601b-3073-40b7-86d8-17b02bffba9e/volumes"
Jan 23 08:18:15 crc kubenswrapper[4982]: I0123 08:18:15.983098 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xtlkt"]
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.001946 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6759-account-create-update-rmsm9" event={"ID":"b8e66ea6-0018-46c2-9c01-70f84bc2f166","Type":"ContainerStarted","Data":"e9952969a6b4b208b9ed22d1c44c2bc462f28e6eff8424d9ca647321f8a239b1"}
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.020398 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8scdh" event={"ID":"9b7300e5-e28d-4690-b05f-1cfb0c034c71","Type":"ContainerStarted","Data":"359996c1743cb7e90b4d2853dd4efb9d6dfccb7579be55f070340964d012524b"}
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.020453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8scdh" event={"ID":"9b7300e5-e28d-4690-b05f-1cfb0c034c71","Type":"ContainerStarted","Data":"560c8e379e46998c102d1dc97339668ceaa951583229801ad35ae28e09901f36"}
Jan 23 08:18:16 crc kubenswrapper[4982]: W0123 08:18:16.020520 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b061f3a_644c_426d_8c3f_d53f664a2703.slice/crio-d418d1a21fde90425b0a68cb71d69817c346e3dd41ef0dc9b7fa9468879ab3f1 WatchSource:0}: Error finding container d418d1a21fde90425b0a68cb71d69817c346e3dd41ef0dc9b7fa9468879ab3f1: Status 404 returned error can't find the container with id d418d1a21fde90425b0a68cb71d69817c346e3dd41ef0dc9b7fa9468879ab3f1
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.038555 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerStarted","Data":"f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba"}
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.039347 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-central-agent" containerID="cri-o://9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a" gracePeriod=30
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.041458 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="proxy-httpd" containerID="cri-o://f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba" gracePeriod=30
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.041563 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="sg-core" containerID="cri-o://ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5" gracePeriod=30
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.041649 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-notification-agent" containerID="cri-o://cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e" gracePeriod=30
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.063717 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kfcmb"]
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.071831 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b9da-account-create-update-f8dpr"]
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.075729 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-8scdh" podStartSLOduration=2.07570857 podStartE2EDuration="2.07570857s" podCreationTimestamp="2026-01-23 08:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:16.056515637 +0000 UTC m=+1370.536879043" watchObservedRunningTime="2026-01-23 08:18:16.07570857 +0000 UTC m=+1370.556071956"
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.088863 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.718484668 podStartE2EDuration="13.088847681s" podCreationTimestamp="2026-01-23 08:18:03 +0000 UTC" firstStartedPulling="2026-01-23 08:18:10.4242281 +0000 UTC m=+1364.904591486" lastFinishedPulling="2026-01-23 08:18:14.794591113 +0000 UTC m=+1369.274954499" observedRunningTime="2026-01-23 08:18:16.087954847 +0000 UTC m=+1370.568318253" watchObservedRunningTime="2026-01-23 08:18:16.088847681 +0000 UTC m=+1370.569211067"
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.173960 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-83ff-account-create-update-9nrqn"]
Jan 23 08:18:16 crc kubenswrapper[4982]: I0123 08:18:16.423496 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 08:18:16 crc kubenswrapper[4982]: W0123 08:18:16.428105 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod497bf205_5985_411f_960e_2364563ae9e1.slice/crio-d724c0b597cff2bd1aeba031b86c5e81de3f3267b8d9c5a73da8c709b7c8d5fe WatchSource:0}: Error finding container d724c0b597cff2bd1aeba031b86c5e81de3f3267b8d9c5a73da8c709b7c8d5fe: Status 404 returned error can't find the container with id d724c0b597cff2bd1aeba031b86c5e81de3f3267b8d9c5a73da8c709b7c8d5fe
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.053851 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"497bf205-5985-411f-960e-2364563ae9e1","Type":"ContainerStarted","Data":"d724c0b597cff2bd1aeba031b86c5e81de3f3267b8d9c5a73da8c709b7c8d5fe"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.061783 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6759-account-create-update-rmsm9" event={"ID":"b8e66ea6-0018-46c2-9c01-70f84bc2f166","Type":"ContainerStarted","Data":"e457861fdfc37c83462e545101e59ed7ce0c7365f3d95c0c6306521b20072e73"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.065833 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b7300e5-e28d-4690-b05f-1cfb0c034c71" containerID="359996c1743cb7e90b4d2853dd4efb9d6dfccb7579be55f070340964d012524b" exitCode=0
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.065912 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8scdh" event={"ID":"9b7300e5-e28d-4690-b05f-1cfb0c034c71","Type":"ContainerDied","Data":"359996c1743cb7e90b4d2853dd4efb9d6dfccb7579be55f070340964d012524b"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.071135 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerStarted","Data":"d7ea125c2290092c8acff44f21a32abe42c5b61462b240a3cd29720011179b31"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.078054 4982 generic.go:334] "Generic (PLEG): container finished" podID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerID="f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba" exitCode=0
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.078089 4982 generic.go:334] "Generic (PLEG): container finished" podID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerID="ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5" exitCode=2
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.078097 4982 generic.go:334] "Generic (PLEG): container finished" podID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerID="cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e" exitCode=0
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.078406 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerDied","Data":"f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.078439 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerDied","Data":"ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.078481 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerDied","Data":"cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.088210 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" event={"ID":"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5","Type":"ContainerStarted","Data":"00f8d8de5d875cede43c41fa7c25b37bdf2460d54d84168ccc836e2dd4d81ffa"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.088268 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" event={"ID":"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5","Type":"ContainerStarted","Data":"3a683b7f9db6bb7e6d08ce2a179d8407723e4de8134f9b44bae0225d3c0f288d"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.090333 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-6759-account-create-update-rmsm9" podStartSLOduration=3.090322061 podStartE2EDuration="3.090322061s" podCreationTimestamp="2026-01-23 08:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:17.081370222 +0000 UTC m=+1371.561733608" watchObservedRunningTime="2026-01-23 08:18:17.090322061 +0000 UTC m=+1371.570685447"
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.103502 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfcmb" event={"ID":"e338b64b-08c9-4c05-b50d-4d476391ef43","Type":"ContainerStarted","Data":"1ef55c9965f862203f61d478c3c3295d2f427277908086d43db10bb9f98dfb4f"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.104147 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfcmb" event={"ID":"e338b64b-08c9-4c05-b50d-4d476391ef43","Type":"ContainerStarted","Data":"32a62b7d759b3ce4be6f0399ff84f7e9c89e4476a0c159ccb0895d43ee5a4ffe"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.111440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" event={"ID":"77a06e0c-0060-42b6-a09c-a805ea7b2878","Type":"ContainerStarted","Data":"880314eb8717f37be8615a3ae2dda79843a37da64b680c2fb416a7297f33a946"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.111773 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" event={"ID":"77a06e0c-0060-42b6-a09c-a805ea7b2878","Type":"ContainerStarted","Data":"da51018a2e017f12da4526a4edfb40900ddba19b29f5a0954674293ba0c1c561"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.115956 4982 generic.go:334] "Generic (PLEG): container finished" podID="9b061f3a-644c-426d-8c3f-d53f664a2703" containerID="a551a54cc37ab343d96eabf6ff0b6e35eb542ef58d634189f03450540bbe97a0" exitCode=0
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.116081 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xtlkt" event={"ID":"9b061f3a-644c-426d-8c3f-d53f664a2703","Type":"ContainerDied","Data":"a551a54cc37ab343d96eabf6ff0b6e35eb542ef58d634189f03450540bbe97a0"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.116178 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xtlkt" event={"ID":"9b061f3a-644c-426d-8c3f-d53f664a2703","Type":"ContainerStarted","Data":"d418d1a21fde90425b0a68cb71d69817c346e3dd41ef0dc9b7fa9468879ab3f1"}
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.148450 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" podStartSLOduration=3.148428882 podStartE2EDuration="3.148428882s" podCreationTimestamp="2026-01-23 08:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:17.135496377 +0000 UTC m=+1371.615859763" watchObservedRunningTime="2026-01-23 08:18:17.148428882 +0000 UTC m=+1371.628792268"
Jan 23 08:18:17 crc kubenswrapper[4982]: I0123 08:18:17.160713 4982
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" podStartSLOduration=3.16069374 podStartE2EDuration="3.16069374s" podCreationTimestamp="2026-01-23 08:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:17.152129951 +0000 UTC m=+1371.632493337" watchObservedRunningTime="2026-01-23 08:18:17.16069374 +0000 UTC m=+1371.641057126" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.131646 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"497bf205-5985-411f-960e-2364563ae9e1","Type":"ContainerStarted","Data":"3cd84b06fc466f0e513e71745eb756df662bac487350768995821b496abc50d0"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.132207 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"497bf205-5985-411f-960e-2364563ae9e1","Type":"ContainerStarted","Data":"98cf2cebef5f1b732d6a19c80cc61916d6f7fb654b98a4e5043b8c16409a7e5f"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.133964 4982 generic.go:334] "Generic (PLEG): container finished" podID="e338b64b-08c9-4c05-b50d-4d476391ef43" containerID="1ef55c9965f862203f61d478c3c3295d2f427277908086d43db10bb9f98dfb4f" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.134042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfcmb" event={"ID":"e338b64b-08c9-4c05-b50d-4d476391ef43","Type":"ContainerDied","Data":"1ef55c9965f862203f61d478c3c3295d2f427277908086d43db10bb9f98dfb4f"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.144958 4982 generic.go:334] "Generic (PLEG): container finished" podID="77a06e0c-0060-42b6-a09c-a805ea7b2878" containerID="880314eb8717f37be8615a3ae2dda79843a37da64b680c2fb416a7297f33a946" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.145078 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" event={"ID":"77a06e0c-0060-42b6-a09c-a805ea7b2878","Type":"ContainerDied","Data":"880314eb8717f37be8615a3ae2dda79843a37da64b680c2fb416a7297f33a946"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.148976 4982 generic.go:334] "Generic (PLEG): container finished" podID="b8e66ea6-0018-46c2-9c01-70f84bc2f166" containerID="e457861fdfc37c83462e545101e59ed7ce0c7365f3d95c0c6306521b20072e73" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.149067 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6759-account-create-update-rmsm9" event={"ID":"b8e66ea6-0018-46c2-9c01-70f84bc2f166","Type":"ContainerDied","Data":"e457861fdfc37c83462e545101e59ed7ce0c7365f3d95c0c6306521b20072e73"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.154429 4982 generic.go:334] "Generic (PLEG): container finished" podID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerID="d7ea125c2290092c8acff44f21a32abe42c5b61462b240a3cd29720011179b31" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.154526 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerDied","Data":"d7ea125c2290092c8acff44f21a32abe42c5b61462b240a3cd29720011179b31"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.159782 4982 generic.go:334] "Generic (PLEG): 
container finished" podID="cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" containerID="00f8d8de5d875cede43c41fa7c25b37bdf2460d54d84168ccc836e2dd4d81ffa" exitCode=0 Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.160026 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" event={"ID":"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5","Type":"ContainerDied","Data":"00f8d8de5d875cede43c41fa7c25b37bdf2460d54d84168ccc836e2dd4d81ffa"} Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.177330 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.177315074 podStartE2EDuration="3.177315074s" podCreationTimestamp="2026-01-23 08:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:18.176157443 +0000 UTC m=+1372.656520839" watchObservedRunningTime="2026-01-23 08:18:18.177315074 +0000 UTC m=+1372.657678460" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.722919 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfcmb" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.841374 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e338b64b-08c9-4c05-b50d-4d476391ef43-operator-scripts\") pod \"e338b64b-08c9-4c05-b50d-4d476391ef43\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.841497 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdpd4\" (UniqueName: \"kubernetes.io/projected/e338b64b-08c9-4c05-b50d-4d476391ef43-kube-api-access-sdpd4\") pod \"e338b64b-08c9-4c05-b50d-4d476391ef43\" (UID: \"e338b64b-08c9-4c05-b50d-4d476391ef43\") " Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.842281 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e338b64b-08c9-4c05-b50d-4d476391ef43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e338b64b-08c9-4c05-b50d-4d476391ef43" (UID: "e338b64b-08c9-4c05-b50d-4d476391ef43"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.842428 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e338b64b-08c9-4c05-b50d-4d476391ef43-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.850609 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e338b64b-08c9-4c05-b50d-4d476391ef43-kube-api-access-sdpd4" (OuterVolumeSpecName: "kube-api-access-sdpd4") pod "e338b64b-08c9-4c05-b50d-4d476391ef43" (UID: "e338b64b-08c9-4c05-b50d-4d476391ef43"). InnerVolumeSpecName "kube-api-access-sdpd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.872488 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8scdh" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.890769 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xtlkt" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.945051 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7300e5-e28d-4690-b05f-1cfb0c034c71-operator-scripts\") pod \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.945201 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7r9l\" (UniqueName: \"kubernetes.io/projected/9b7300e5-e28d-4690-b05f-1cfb0c034c71-kube-api-access-c7r9l\") pod \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\" (UID: \"9b7300e5-e28d-4690-b05f-1cfb0c034c71\") " Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.945672 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b7300e5-e28d-4690-b05f-1cfb0c034c71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b7300e5-e28d-4690-b05f-1cfb0c034c71" (UID: "9b7300e5-e28d-4690-b05f-1cfb0c034c71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.946706 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdpd4\" (UniqueName: \"kubernetes.io/projected/e338b64b-08c9-4c05-b50d-4d476391ef43-kube-api-access-sdpd4\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.946737 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b7300e5-e28d-4690-b05f-1cfb0c034c71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:18 crc kubenswrapper[4982]: I0123 08:18:18.951660 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7300e5-e28d-4690-b05f-1cfb0c034c71-kube-api-access-c7r9l" (OuterVolumeSpecName: "kube-api-access-c7r9l") pod "9b7300e5-e28d-4690-b05f-1cfb0c034c71" (UID: "9b7300e5-e28d-4690-b05f-1cfb0c034c71"). InnerVolumeSpecName "kube-api-access-c7r9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.048088 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b061f3a-644c-426d-8c3f-d53f664a2703-operator-scripts\") pod \"9b061f3a-644c-426d-8c3f-d53f664a2703\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.048277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q7kz\" (UniqueName: \"kubernetes.io/projected/9b061f3a-644c-426d-8c3f-d53f664a2703-kube-api-access-5q7kz\") pod \"9b061f3a-644c-426d-8c3f-d53f664a2703\" (UID: \"9b061f3a-644c-426d-8c3f-d53f664a2703\") " Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.048815 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7r9l\" (UniqueName: \"kubernetes.io/projected/9b7300e5-e28d-4690-b05f-1cfb0c034c71-kube-api-access-c7r9l\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.048854 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b061f3a-644c-426d-8c3f-d53f664a2703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b061f3a-644c-426d-8c3f-d53f664a2703" (UID: "9b061f3a-644c-426d-8c3f-d53f664a2703"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.052822 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b061f3a-644c-426d-8c3f-d53f664a2703-kube-api-access-5q7kz" (OuterVolumeSpecName: "kube-api-access-5q7kz") pod "9b061f3a-644c-426d-8c3f-d53f664a2703" (UID: "9b061f3a-644c-426d-8c3f-d53f664a2703"). InnerVolumeSpecName "kube-api-access-5q7kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.151179 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q7kz\" (UniqueName: \"kubernetes.io/projected/9b061f3a-644c-426d-8c3f-d53f664a2703-kube-api-access-5q7kz\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.151213 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b061f3a-644c-426d-8c3f-d53f664a2703-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.183096 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xtlkt" event={"ID":"9b061f3a-644c-426d-8c3f-d53f664a2703","Type":"ContainerDied","Data":"d418d1a21fde90425b0a68cb71d69817c346e3dd41ef0dc9b7fa9468879ab3f1"} Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.183135 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xtlkt" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.183149 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d418d1a21fde90425b0a68cb71d69817c346e3dd41ef0dc9b7fa9468879ab3f1" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.188129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kfcmb" event={"ID":"e338b64b-08c9-4c05-b50d-4d476391ef43","Type":"ContainerDied","Data":"32a62b7d759b3ce4be6f0399ff84f7e9c89e4476a0c159ccb0895d43ee5a4ffe"} Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.188173 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a62b7d759b3ce4be6f0399ff84f7e9c89e4476a0c159ccb0895d43ee5a4ffe" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.188233 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kfcmb" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.191159 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8scdh" event={"ID":"9b7300e5-e28d-4690-b05f-1cfb0c034c71","Type":"ContainerDied","Data":"560c8e379e46998c102d1dc97339668ceaa951583229801ad35ae28e09901f36"} Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.191223 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560c8e379e46998c102d1dc97339668ceaa951583229801ad35ae28e09901f36" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.191170 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8scdh" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.419802 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.421971 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-log" containerID="cri-o://40014c145259aa5d0d53ce81504b0a95f5a7c5981fc26df4df0e2b27a5cfe1fe" gracePeriod=30 Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.424515 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-httpd" containerID="cri-o://9124ee3b4fec4679b2565ead7eeef31d586524dda44fbf7a4b3655c10fedd22f" gracePeriod=30 Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.530938 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.666975 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77a06e0c-0060-42b6-a09c-a805ea7b2878-operator-scripts\") pod \"77a06e0c-0060-42b6-a09c-a805ea7b2878\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.667489 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5pqx\" (UniqueName: \"kubernetes.io/projected/77a06e0c-0060-42b6-a09c-a805ea7b2878-kube-api-access-m5pqx\") pod \"77a06e0c-0060-42b6-a09c-a805ea7b2878\" (UID: \"77a06e0c-0060-42b6-a09c-a805ea7b2878\") " Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.670559 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77a06e0c-0060-42b6-a09c-a805ea7b2878-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77a06e0c-0060-42b6-a09c-a805ea7b2878" (UID: "77a06e0c-0060-42b6-a09c-a805ea7b2878"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.673271 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77a06e0c-0060-42b6-a09c-a805ea7b2878-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.675739 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a06e0c-0060-42b6-a09c-a805ea7b2878-kube-api-access-m5pqx" (OuterVolumeSpecName: "kube-api-access-m5pqx") pod "77a06e0c-0060-42b6-a09c-a805ea7b2878" (UID: "77a06e0c-0060-42b6-a09c-a805ea7b2878"). InnerVolumeSpecName "kube-api-access-m5pqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.780402 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5pqx\" (UniqueName: \"kubernetes.io/projected/77a06e0c-0060-42b6-a09c-a805ea7b2878-kube-api-access-m5pqx\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.826617 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.882281 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pz8h\" (UniqueName: \"kubernetes.io/projected/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-kube-api-access-7pz8h\") pod \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.882716 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-operator-scripts\") pod \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\" (UID: \"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5\") " Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.884528 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" (UID: "cbf975c1-5014-40e8-adfe-55d5d0c5bdd5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.885139 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.895814 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-kube-api-access-7pz8h" (OuterVolumeSpecName: "kube-api-access-7pz8h") pod "cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" (UID: "cbf975c1-5014-40e8-adfe-55d5d0c5bdd5"). InnerVolumeSpecName "kube-api-access-7pz8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.947885 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6759-account-create-update-rmsm9" Jan 23 08:18:19 crc kubenswrapper[4982]: I0123 08:18:19.998223 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pz8h\" (UniqueName: \"kubernetes.io/projected/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5-kube-api-access-7pz8h\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.099840 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e66ea6-0018-46c2-9c01-70f84bc2f166-operator-scripts\") pod \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.099912 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbjs\" (UniqueName: \"kubernetes.io/projected/b8e66ea6-0018-46c2-9c01-70f84bc2f166-kube-api-access-vgbjs\") pod \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\" (UID: \"b8e66ea6-0018-46c2-9c01-70f84bc2f166\") " Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.100386 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e66ea6-0018-46c2-9c01-70f84bc2f166-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b8e66ea6-0018-46c2-9c01-70f84bc2f166" (UID: "b8e66ea6-0018-46c2-9c01-70f84bc2f166"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.101090 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8e66ea6-0018-46c2-9c01-70f84bc2f166-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.103564 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e66ea6-0018-46c2-9c01-70f84bc2f166-kube-api-access-vgbjs" (OuterVolumeSpecName: "kube-api-access-vgbjs") pod "b8e66ea6-0018-46c2-9c01-70f84bc2f166" (UID: "b8e66ea6-0018-46c2-9c01-70f84bc2f166"). InnerVolumeSpecName "kube-api-access-vgbjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.203140 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgbjs\" (UniqueName: \"kubernetes.io/projected/b8e66ea6-0018-46c2-9c01-70f84bc2f166-kube-api-access-vgbjs\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.213021 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerStarted","Data":"cd022cf6d2f151ed4941876ba6179f26a1aefe0b107ecc8c503849f1efa7e2bd"} Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.218966 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" event={"ID":"cbf975c1-5014-40e8-adfe-55d5d0c5bdd5","Type":"ContainerDied","Data":"3a683b7f9db6bb7e6d08ce2a179d8407723e4de8134f9b44bae0225d3c0f288d"} Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.219014 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a683b7f9db6bb7e6d08ce2a179d8407723e4de8134f9b44bae0225d3c0f288d" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.219069 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-83ff-account-create-update-9nrqn" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.222329 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.222328 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b9da-account-create-update-f8dpr" event={"ID":"77a06e0c-0060-42b6-a09c-a805ea7b2878","Type":"ContainerDied","Data":"da51018a2e017f12da4526a4edfb40900ddba19b29f5a0954674293ba0c1c561"} Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.222798 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da51018a2e017f12da4526a4edfb40900ddba19b29f5a0954674293ba0c1c561" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.224156 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6759-account-create-update-rmsm9" event={"ID":"b8e66ea6-0018-46c2-9c01-70f84bc2f166","Type":"ContainerDied","Data":"e9952969a6b4b208b9ed22d1c44c2bc462f28e6eff8424d9ca647321f8a239b1"} Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.224185 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9952969a6b4b208b9ed22d1c44c2bc462f28e6eff8424d9ca647321f8a239b1" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.224270 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6759-account-create-update-rmsm9" Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.230612 4982 generic.go:334] "Generic (PLEG): container finished" podID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerID="40014c145259aa5d0d53ce81504b0a95f5a7c5981fc26df4df0e2b27a5cfe1fe" exitCode=143 Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.230688 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61395d90-ce5a-44a7-9e48-f6d151b17774","Type":"ContainerDied","Data":"40014c145259aa5d0d53ce81504b0a95f5a7c5981fc26df4df0e2b27a5cfe1fe"} Jan 23 08:18:20 crc kubenswrapper[4982]: I0123 08:18:20.250316 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jlbt" podStartSLOduration=3.845864603 podStartE2EDuration="8.250296484s" podCreationTimestamp="2026-01-23 08:18:12 +0000 UTC" firstStartedPulling="2026-01-23 08:18:14.961322615 +0000 UTC m=+1369.441686001" lastFinishedPulling="2026-01-23 08:18:19.365754496 +0000 UTC m=+1373.846117882" observedRunningTime="2026-01-23 08:18:20.233004643 +0000 UTC m=+1374.713368059" watchObservedRunningTime="2026-01-23 08:18:20.250296484 +0000 UTC m=+1374.730659880" Jan 23 08:18:23 crc kubenswrapper[4982]: I0123 08:18:23.126039 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jlbt" Jan 23 08:18:23 crc kubenswrapper[4982]: I0123 08:18:23.126697 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jlbt" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.167062 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.187399 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7jlbt" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="registry-server" probeResult="failure" output=< Jan 23 08:18:24 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:18:24 crc kubenswrapper[4982]: > Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.279083 4982 generic.go:334] "Generic (PLEG): container finished" podID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerID="9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a" exitCode=0 Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.279171 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.279157 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerDied","Data":"9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a"} Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.279249 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7eb1d299-9c2c-429a-b19d-988f037af7fb","Type":"ContainerDied","Data":"7ef4bbab4a6e91b1f9e026e81f96851c057904ca228f13e9dc1f378111b16d60"} Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.279278 4982 scope.go:117] "RemoveContainer" containerID="f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.283670 4982 generic.go:334] "Generic (PLEG): container finished" podID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerID="9124ee3b4fec4679b2565ead7eeef31d586524dda44fbf7a4b3655c10fedd22f" exitCode=0 Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.283713 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61395d90-ce5a-44a7-9e48-f6d151b17774","Type":"ContainerDied","Data":"9124ee3b4fec4679b2565ead7eeef31d586524dda44fbf7a4b3655c10fedd22f"} Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.283747 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61395d90-ce5a-44a7-9e48-f6d151b17774","Type":"ContainerDied","Data":"db0c504d4deb287b237b7148c270933195291a10ae23557897a050eec742c350"} Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.283761 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0c504d4deb287b237b7148c270933195291a10ae23557897a050eec742c350" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.297740 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.309010 4982 scope.go:117] "RemoveContainer" containerID="ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.311397 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-config-data\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.312192 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6fq4\" (UniqueName: \"kubernetes.io/projected/7eb1d299-9c2c-429a-b19d-988f037af7fb-kube-api-access-s6fq4\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.312284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-combined-ca-bundle\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.312381 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-run-httpd\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.312459 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-log-httpd\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.312538 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-scripts\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.312960 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-sg-core-conf-yaml\") pod \"7eb1d299-9c2c-429a-b19d-988f037af7fb\" (UID: \"7eb1d299-9c2c-429a-b19d-988f037af7fb\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.320945 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.321347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.321513 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-scripts" (OuterVolumeSpecName: "scripts") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.345156 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.345197 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7eb1d299-9c2c-429a-b19d-988f037af7fb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.345207 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.365476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb1d299-9c2c-429a-b19d-988f037af7fb-kube-api-access-s6fq4" (OuterVolumeSpecName: "kube-api-access-s6fq4") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "kube-api-access-s6fq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.373033 4982 scope.go:117] "RemoveContainer" containerID="cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.415350 4982 scope.go:117] "RemoveContainer" containerID="9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.418524 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.446565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4p64\" (UniqueName: \"kubernetes.io/projected/61395d90-ce5a-44a7-9e48-f6d151b17774-kube-api-access-d4p64\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.446662 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.446947 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-scripts\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.446995 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-combined-ca-bundle\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.447037 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-public-tls-certs\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.447097 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-logs\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.447133 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-httpd-run\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.447192 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-config-data\") pod \"61395d90-ce5a-44a7-9e48-f6d151b17774\" (UID: \"61395d90-ce5a-44a7-9e48-f6d151b17774\") " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.447618 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-logs" (OuterVolumeSpecName: "logs") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.448197 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.448213 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.448225 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6fq4\" (UniqueName: \"kubernetes.io/projected/7eb1d299-9c2c-429a-b19d-988f037af7fb-kube-api-access-s6fq4\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.449384 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.451547 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.456135 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61395d90-ce5a-44a7-9e48-f6d151b17774-kube-api-access-d4p64" (OuterVolumeSpecName: "kube-api-access-d4p64") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "kube-api-access-d4p64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.457493 4982 scope.go:117] "RemoveContainer" containerID="f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.457847 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-scripts" (OuterVolumeSpecName: "scripts") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.458777 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba\": container with ID starting with f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba not found: ID does not exist" containerID="f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.458836 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba"} err="failed to get container status \"f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba\": rpc error: code = NotFound desc = could not find container \"f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba\": container with ID starting with f90223c4e195ca24d980d85b861f4a0bd18b3621fbec87a805e5cef0b9ce20ba not found: ID does not exist" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.458881 4982 scope.go:117] "RemoveContainer" containerID="ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.459550 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5\": container with ID starting with ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5 not found: ID does not exist" containerID="ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.459572 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5"} err="failed to get container status \"ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5\": rpc error: code = NotFound desc = could not find container \"ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5\": container with ID starting with ee229d9c5ceed4f69356d9d06b2b34440e42b397f9732127dd997f906ef7e0d5 not found: ID does not exist" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.459588 4982 scope.go:117] "RemoveContainer" containerID="cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.460142 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e\": container with ID starting with cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e not found: ID does not exist" containerID="cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.460175 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e"} err="failed to get container status \"cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e\": rpc error: code = NotFound desc = could not find container \"cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e\": container with ID starting with cc5407af14ebfb0c282cac8b2e4fd620380f0282c3f5424ca8eb630be957968e 
not found: ID does not exist" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.460189 4982 scope.go:117] "RemoveContainer" containerID="9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.460382 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a\": container with ID starting with 9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a not found: ID does not exist" containerID="9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.460402 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a"} err="failed to get container status \"9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a\": rpc error: code = NotFound desc = could not find container \"9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a\": container with ID starting with 9f6cc615337a3000d1b4cfb545f449675022a576b63e167bdca5e3ee8f49524a not found: ID does not exist" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.491755 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.513134 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.522781 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-config-data" (OuterVolumeSpecName: "config-data") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.538713 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61395d90-ce5a-44a7-9e48-f6d151b17774" (UID: "61395d90-ce5a-44a7-9e48-f6d151b17774"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.543965 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-config-data" (OuterVolumeSpecName: "config-data") pod "7eb1d299-9c2c-429a-b19d-988f037af7fb" (UID: "7eb1d299-9c2c-429a-b19d-988f037af7fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551123 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4p64\" (UniqueName: \"kubernetes.io/projected/61395d90-ce5a-44a7-9e48-f6d151b17774-kube-api-access-d4p64\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551254 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551279 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551291 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551304 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551313 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551323 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61395d90-ce5a-44a7-9e48-f6d151b17774-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551331 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61395d90-ce5a-44a7-9e48-f6d151b17774-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.551341 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb1d299-9c2c-429a-b19d-988f037af7fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.587528 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.626838 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.641438 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.653432 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655034 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655683 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="proxy-httpd" Jan 23 
08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655708 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="proxy-httpd" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655731 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-notification-agent" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655740 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-notification-agent" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655759 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-httpd" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655766 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-httpd" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655789 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e338b64b-08c9-4c05-b50d-4d476391ef43" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655797 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e338b64b-08c9-4c05-b50d-4d476391ef43" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655809 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a06e0c-0060-42b6-a09c-a805ea7b2878" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655817 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a06e0c-0060-42b6-a09c-a805ea7b2878" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655829 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-central-agent" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655837 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-central-agent" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655852 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="sg-core" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655860 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="sg-core" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655870 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-log" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655877 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-log" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655895 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b061f3a-644c-426d-8c3f-d53f664a2703" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655903 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b061f3a-644c-426d-8c3f-d53f664a2703" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655917 4982 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9b7300e5-e28d-4690-b05f-1cfb0c034c71" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655924 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7300e5-e28d-4690-b05f-1cfb0c034c71" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655952 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e66ea6-0018-46c2-9c01-70f84bc2f166" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655960 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e66ea6-0018-46c2-9c01-70f84bc2f166" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: E0123 08:18:24.655970 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.655978 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656231 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-central-agent" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656244 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a06e0c-0060-42b6-a09c-a805ea7b2878" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656260 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7300e5-e28d-4690-b05f-1cfb0c034c71" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656278 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="proxy-httpd" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656292 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="ceilometer-notification-agent" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656303 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-log" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656314 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656326 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e66ea6-0018-46c2-9c01-70f84bc2f166" containerName="mariadb-account-create-update" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656343 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" containerName="sg-core" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656358 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b061f3a-644c-426d-8c3f-d53f664a2703" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656367 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" containerName="glance-httpd" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.656379 4982 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e338b64b-08c9-4c05-b50d-4d476391ef43" containerName="mariadb-database-create" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.658696 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.663520 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.667267 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.676654 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.755898 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-run-httpd\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.755965 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-log-httpd\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.756001 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swlf\" (UniqueName: \"kubernetes.io/projected/33c02337-d0e4-4d11-b328-77ba4bbfa614-kube-api-access-5swlf\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.756075 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-config-data\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.756214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-scripts\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.756236 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.756509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859255 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-scripts\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859377 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859447 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-run-httpd\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859480 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-log-httpd\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859512 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swlf\" (UniqueName: \"kubernetes.io/projected/33c02337-d0e4-4d11-b328-77ba4bbfa614-kube-api-access-5swlf\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.859558 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-config-data\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.860153 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-run-httpd\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.860359 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-log-httpd\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.863669 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.863858 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-scripts\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.863988 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.864207 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-config-data\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.882308 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swlf\" (UniqueName: \"kubernetes.io/projected/33c02337-d0e4-4d11-b328-77ba4bbfa614-kube-api-access-5swlf\") pod \"ceilometer-0\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " pod="openstack/ceilometer-0" Jan 23 08:18:24 crc kubenswrapper[4982]: I0123 08:18:24.977094 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.110226 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ppn57"] Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.112298 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.116881 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bgxwc" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.117069 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.117175 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.139163 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ppn57"] Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.275666 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf27w\" (UniqueName: \"kubernetes.io/projected/c55f5436-8054-4e06-9fc7-a43a1700ebd3-kube-api-access-pf27w\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.275994 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-scripts\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.276024 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.276185 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-config-data\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.308120 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.375853 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.380867 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-config-data\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.381038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf27w\" (UniqueName: \"kubernetes.io/projected/c55f5436-8054-4e06-9fc7-a43a1700ebd3-kube-api-access-pf27w\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.381192 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-scripts\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.381237 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.395350 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-scripts\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.395791 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-config-data\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.397535 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:18:25 crc 
kubenswrapper[4982]: I0123 08:18:25.404370 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf27w\" (UniqueName: \"kubernetes.io/projected/c55f5436-8054-4e06-9fc7-a43a1700ebd3-kube-api-access-pf27w\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.408321 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.409553 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ppn57\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.410719 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.413785 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.413828 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.418126 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.502538 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.594828 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.594939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.594989 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-logs\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.595039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.595182 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.595232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.595263 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk7tf\" (UniqueName: \"kubernetes.io/projected/75a96cf7-6cad-496c-9aa6-523fce7af3ff-kube-api-access-qk7tf\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.595301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.671411 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.671465 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.697939 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721300 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721411 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721441 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk7tf\" (UniqueName: \"kubernetes.io/projected/75a96cf7-6cad-496c-9aa6-523fce7af3ff-kube-api-access-qk7tf\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721487 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" 
Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721652 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721755 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721805 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-logs\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.721851 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-config-data\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.724388 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.728700 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.729776 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.731189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-logs\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.739420 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.743018 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.748783 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk7tf\" (UniqueName: \"kubernetes.io/projected/75a96cf7-6cad-496c-9aa6-523fce7af3ff-kube-api-access-qk7tf\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.752191 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.754797 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.755480 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-scripts\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.842159 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " pod="openstack/glance-default-external-api-0" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.879924 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61395d90-ce5a-44a7-9e48-f6d151b17774" path="/var/lib/kubelet/pods/61395d90-ce5a-44a7-9e48-f6d151b17774/volumes" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.880838 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb1d299-9c2c-429a-b19d-988f037af7fb" path="/var/lib/kubelet/pods/7eb1d299-9c2c-429a-b19d-988f037af7fb/volumes" Jan 23 08:18:25 crc kubenswrapper[4982]: I0123 08:18:25.970902 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ppn57"] Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.107843 4982 util.go:30] "No sandbox for pod can be found. 
Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.107843 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.329098 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerStarted","Data":"0df17f475fdd6819714ddb60fd35a99af9d23cd3babaad65bf228591b6e2c649"}
Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.332479 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ppn57" event={"ID":"c55f5436-8054-4e06-9fc7-a43a1700ebd3","Type":"ContainerStarted","Data":"d053a2c1f6c1927f8d80c864ef729d1c5fd7cff616708f2e260b0109f1bed346"}
Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.332795 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.332888 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:26 crc kubenswrapper[4982]: W0123 08:18:26.918032 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75a96cf7_6cad_496c_9aa6_523fce7af3ff.slice/crio-0729eb8ee9dbced3b30dc5842096d3e6b61158bcb468ed8724b6c9f1b68a612f WatchSource:0}: Error finding container 0729eb8ee9dbced3b30dc5842096d3e6b61158bcb468ed8724b6c9f1b68a612f: Status 404 returned error can't find the container with id 0729eb8ee9dbced3b30dc5842096d3e6b61158bcb468ed8724b6c9f1b68a612f
Jan 23 08:18:26 crc kubenswrapper[4982]: I0123 08:18:26.938957 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 08:18:27 crc kubenswrapper[4982]: I0123 08:18:27.363858 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerStarted","Data":"b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786"}
Jan 23 08:18:27 crc kubenswrapper[4982]: I0123 08:18:27.368759 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a96cf7-6cad-496c-9aa6-523fce7af3ff","Type":"ContainerStarted","Data":"0729eb8ee9dbced3b30dc5842096d3e6b61158bcb468ed8724b6c9f1b68a612f"}
Jan 23 08:18:27 crc kubenswrapper[4982]: I0123 08:18:27.729336 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:18:29 crc kubenswrapper[4982]: I0123 08:18:29.012345 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:29 crc kubenswrapper[4982]: I0123 08:18:29.013981 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 08:18:29 crc kubenswrapper[4982]: I0123 08:18:29.023707 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 08:18:29 crc kubenswrapper[4982]: I0123 08:18:29.403279 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a96cf7-6cad-496c-9aa6-523fce7af3ff","Type":"ContainerStarted","Data":"2ae97d13aafab9b78c55d20b89d369229f390d7aed5aea78a7522b15b9f48310"}
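[Annotation] The "SyncLoop (PLEG)" lines above come from the kubelet's pod lifecycle event generator: it periodically relists containers and turns state transitions into ContainerStarted/ContainerDied events (the cadvisor "Status 404" watch warning is usually a transient race, seen when a cgroup appears before cadvisor can resolve its container). A toy sketch of the relist-and-diff idea follows; the names are invented for illustration, and the real PLEG in pkg/kubelet/pleg is considerably more involved.

```go
// Minimal PLEG-style diff: compare two relist snapshots and emit events.
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	ContainerID string
	Type        string // "ContainerStarted" or "ContainerDied"
}

// diff turns state transitions between snapshots (containerID -> state)
// into lifecycle events, the way a relist pass would.
func diff(old, cur map[string]state) []event {
	var events []event
	for id, s := range cur {
		switch {
		case s == running && old[id] != running:
			events = append(events, event{id, "ContainerStarted"})
		case s == exited && old[id] == running:
			events = append(events, event{id, "ContainerDied"})
		}
	}
	return events
}

func main() {
	prev := map[string]state{"0df17f475fdd": running}
	curr := map[string]state{"0df17f475fdd": running, "b539d4262aaf": running}
	for _, e := range diff(prev, curr) {
		fmt.Printf("SyncLoop (PLEG): %s %s\n", e.Type, e.ContainerID)
	}
}
```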
event={"ID":"75a96cf7-6cad-496c-9aa6-523fce7af3ff","Type":"ContainerStarted","Data":"f85206e6ea38cc3b96a676eb48cd6ab7fe8f14acdcd1691d0be85962d1d2a103"} Jan 23 08:18:30 crc kubenswrapper[4982]: I0123 08:18:30.454561 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.454534205 podStartE2EDuration="5.454534205s" podCreationTimestamp="2026-01-23 08:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:30.443251704 +0000 UTC m=+1384.923615100" watchObservedRunningTime="2026-01-23 08:18:30.454534205 +0000 UTC m=+1384.934897591" Jan 23 08:18:31 crc kubenswrapper[4982]: I0123 08:18:31.432608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerStarted","Data":"8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c"} Jan 23 08:18:33 crc kubenswrapper[4982]: I0123 08:18:33.190486 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jlbt" Jan 23 08:18:33 crc kubenswrapper[4982]: I0123 08:18:33.235770 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jlbt" Jan 23 08:18:33 crc kubenswrapper[4982]: I0123 08:18:33.436727 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jlbt"] Jan 23 08:18:34 crc kubenswrapper[4982]: I0123 08:18:34.485983 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7jlbt" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="registry-server" containerID="cri-o://cd022cf6d2f151ed4941876ba6179f26a1aefe0b107ecc8c503849f1efa7e2bd" gracePeriod=2 Jan 23 08:18:35 crc kubenswrapper[4982]: I0123 08:18:35.500083 4982 generic.go:334] "Generic (PLEG): container finished" podID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerID="cd022cf6d2f151ed4941876ba6179f26a1aefe0b107ecc8c503849f1efa7e2bd" exitCode=0 Jan 23 08:18:35 crc kubenswrapper[4982]: I0123 08:18:35.500175 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerDied","Data":"cd022cf6d2f151ed4941876ba6179f26a1aefe0b107ecc8c503849f1efa7e2bd"} Jan 23 08:18:36 crc kubenswrapper[4982]: I0123 08:18:36.108482 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 08:18:36 crc kubenswrapper[4982]: I0123 08:18:36.108565 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 08:18:36 crc kubenswrapper[4982]: I0123 08:18:36.145719 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 08:18:36 crc kubenswrapper[4982]: I0123 08:18:36.158408 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 08:18:36 crc kubenswrapper[4982]: I0123 08:18:36.512276 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 08:18:36 crc kubenswrapper[4982]: I0123 08:18:36.512337 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.235899 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jlbt" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.384099 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c45w4\" (UniqueName: \"kubernetes.io/projected/b12bf1c3-1f7c-4ac1-9af2-748091012432-kube-api-access-c45w4\") pod \"b12bf1c3-1f7c-4ac1-9af2-748091012432\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.385167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-catalog-content\") pod \"b12bf1c3-1f7c-4ac1-9af2-748091012432\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.385428 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-utilities\") pod \"b12bf1c3-1f7c-4ac1-9af2-748091012432\" (UID: \"b12bf1c3-1f7c-4ac1-9af2-748091012432\") " Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.386150 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-utilities" (OuterVolumeSpecName: "utilities") pod "b12bf1c3-1f7c-4ac1-9af2-748091012432" (UID: "b12bf1c3-1f7c-4ac1-9af2-748091012432"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.402490 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12bf1c3-1f7c-4ac1-9af2-748091012432-kube-api-access-c45w4" (OuterVolumeSpecName: "kube-api-access-c45w4") pod "b12bf1c3-1f7c-4ac1-9af2-748091012432" (UID: "b12bf1c3-1f7c-4ac1-9af2-748091012432"). InnerVolumeSpecName "kube-api-access-c45w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.488586 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c45w4\" (UniqueName: \"kubernetes.io/projected/b12bf1c3-1f7c-4ac1-9af2-748091012432-kube-api-access-c45w4\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.488647 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.501978 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b12bf1c3-1f7c-4ac1-9af2-748091012432" (UID: "b12bf1c3-1f7c-4ac1-9af2-748091012432"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.525539 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ppn57" event={"ID":"c55f5436-8054-4e06-9fc7-a43a1700ebd3","Type":"ContainerStarted","Data":"12f0cebe9233222bfedaa257c3731f1a120a201d5d813c1562bc768ebb34db66"} Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.532852 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerStarted","Data":"6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077"} Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.542210 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jlbt" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.542594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jlbt" event={"ID":"b12bf1c3-1f7c-4ac1-9af2-748091012432","Type":"ContainerDied","Data":"c18b11202992711cbabe99c7eaba90eb87df1907f8378500edf9e230513fc6b5"} Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.542770 4982 scope.go:117] "RemoveContainer" containerID="cd022cf6d2f151ed4941876ba6179f26a1aefe0b107ecc8c503849f1efa7e2bd" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.557436 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ppn57" podStartSLOduration=1.639212973 podStartE2EDuration="12.557413477s" podCreationTimestamp="2026-01-23 08:18:25 +0000 UTC" firstStartedPulling="2026-01-23 08:18:25.960805599 +0000 UTC m=+1380.441168995" lastFinishedPulling="2026-01-23 08:18:36.879006113 +0000 UTC m=+1391.359369499" observedRunningTime="2026-01-23 08:18:37.552330691 +0000 UTC m=+1392.032694067" watchObservedRunningTime="2026-01-23 08:18:37.557413477 +0000 UTC m=+1392.037776863" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.587882 4982 scope.go:117] "RemoveContainer" containerID="d7ea125c2290092c8acff44f21a32abe42c5b61462b240a3cd29720011179b31" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.591759 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b12bf1c3-1f7c-4ac1-9af2-748091012432-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.618996 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jlbt"] Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.644566 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7jlbt"] Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.648475 4982 scope.go:117] "RemoveContainer" containerID="2d66b559ce3dd498f332e2950e63d2b4e69d1a2f733bc6a21bb0158af3a2813b" Jan 23 08:18:37 crc kubenswrapper[4982]: I0123 08:18:37.861998 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" path="/var/lib/kubelet/pods/b12bf1c3-1f7c-4ac1-9af2-748091012432/volumes" Jan 23 08:18:39 crc kubenswrapper[4982]: I0123 08:18:39.615618 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 08:18:39 crc kubenswrapper[4982]: I0123 08:18:39.616904 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 
08:18:39 crc kubenswrapper[4982]: I0123 08:18:39.620276 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 08:18:40 crc kubenswrapper[4982]: I0123 08:18:40.574132 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerStarted","Data":"044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79"}
Jan 23 08:18:40 crc kubenswrapper[4982]: I0123 08:18:40.574436 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="proxy-httpd" containerID="cri-o://044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79" gracePeriod=30
Jan 23 08:18:40 crc kubenswrapper[4982]: I0123 08:18:40.574460 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="sg-core" containerID="cri-o://6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077" gracePeriod=30
Jan 23 08:18:40 crc kubenswrapper[4982]: I0123 08:18:40.574469 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-notification-agent" containerID="cri-o://8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c" gracePeriod=30
Jan 23 08:18:40 crc kubenswrapper[4982]: I0123 08:18:40.574346 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-central-agent" containerID="cri-o://b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786" gracePeriod=30
Jan 23 08:18:40 crc kubenswrapper[4982]: I0123 08:18:40.604920 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9383248 podStartE2EDuration="16.604894717s" podCreationTimestamp="2026-01-23 08:18:24 +0000 UTC" firstStartedPulling="2026-01-23 08:18:25.702292206 +0000 UTC m=+1380.182655592" lastFinishedPulling="2026-01-23 08:18:39.368862113 +0000 UTC m=+1393.849225509" observedRunningTime="2026-01-23 08:18:40.601980019 +0000 UTC m=+1395.082343405" watchObservedRunningTime="2026-01-23 08:18:40.604894717 +0000 UTC m=+1395.085258103"
Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.436065 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.436151 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.604117 4982 generic.go:334] "Generic (PLEG): container finished" podID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerID="044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79" exitCode=0
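[Annotation] "Killing container with a grace period ... gracePeriod=30" above is the usual TERM-then-KILL protocol: signal the container, wait up to the grace period, then force-kill; the exit codes in the ContainerDied lines that follow (0 for clean shutdowns, 2 for sg-core) reflect how each process handled the signal. The sketch below drives a local process rather than a CRI runtime, but the flow is the same idea; it is an illustration, not kubelet code.

```go
// TERM-then-KILL with a grace period, mirroring the kubelet's kill flow.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	gracePeriod := 2 * time.Second
	cmd.Process.Signal(syscall.SIGTERM) // polite request first

	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		cmd.Process.Kill() // grace period expired: force it
		fmt.Println("killed after grace period:", <-done)
	}
}
```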
generic.go:334] "Generic (PLEG): container finished" podID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerID="6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077" exitCode=2 Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.604161 4982 generic.go:334] "Generic (PLEG): container finished" podID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerID="b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786" exitCode=0 Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.604207 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerDied","Data":"044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79"} Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.604265 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerDied","Data":"6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077"} Jan 23 08:18:41 crc kubenswrapper[4982]: I0123 08:18:41.604276 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerDied","Data":"b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786"} Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.041588 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.187972 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-run-httpd\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188507 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-combined-ca-bundle\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188569 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-scripts\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188564 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188616 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swlf\" (UniqueName: \"kubernetes.io/projected/33c02337-d0e4-4d11-b328-77ba4bbfa614-kube-api-access-5swlf\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188686 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-config-data\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188778 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-log-httpd\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.188901 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-sg-core-conf-yaml\") pod \"33c02337-d0e4-4d11-b328-77ba4bbfa614\" (UID: \"33c02337-d0e4-4d11-b328-77ba4bbfa614\") " Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.189548 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.189640 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.193916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c02337-d0e4-4d11-b328-77ba4bbfa614-kube-api-access-5swlf" (OuterVolumeSpecName: "kube-api-access-5swlf") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "kube-api-access-5swlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.194544 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-scripts" (OuterVolumeSpecName: "scripts") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.219112 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.278449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.291682 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.291712 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.291724 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5swlf\" (UniqueName: \"kubernetes.io/projected/33c02337-d0e4-4d11-b328-77ba4bbfa614-kube-api-access-5swlf\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.291737 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33c02337-d0e4-4d11-b328-77ba4bbfa614-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.291747 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.298658 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-config-data" (OuterVolumeSpecName: "config-data") pod "33c02337-d0e4-4d11-b328-77ba4bbfa614" (UID: "33c02337-d0e4-4d11-b328-77ba4bbfa614"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.393564 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33c02337-d0e4-4d11-b328-77ba4bbfa614-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.614967 4982 generic.go:334] "Generic (PLEG): container finished" podID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerID="8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c" exitCode=0 Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.615015 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerDied","Data":"8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c"} Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.615048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33c02337-d0e4-4d11-b328-77ba4bbfa614","Type":"ContainerDied","Data":"0df17f475fdd6819714ddb60fd35a99af9d23cd3babaad65bf228591b6e2c649"} Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.615086 4982 scope.go:117] "RemoveContainer" containerID="044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.615253 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.637520 4982 scope.go:117] "RemoveContainer" containerID="6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.657225 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.670261 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.684799 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685310 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="extract-utilities" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685337 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="extract-utilities" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685364 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="extract-content" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685374 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="extract-content" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685390 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-notification-agent" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685399 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-notification-agent" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685414 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" 
containerName="registry-server" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685420 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="registry-server" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685438 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="sg-core" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685443 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="sg-core" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685455 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="proxy-httpd" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685461 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="proxy-httpd" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.685477 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-central-agent" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685484 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-central-agent" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685636 4982 scope.go:117] "RemoveContainer" containerID="8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685704 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="proxy-httpd" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685722 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-notification-agent" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685737 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="sg-core" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685753 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12bf1c3-1f7c-4ac1-9af2-748091012432" containerName="registry-server" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.685763 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" containerName="ceilometer-central-agent" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.687774 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.690045 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.691062 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.696203 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.752046 4982 scope.go:117] "RemoveContainer" containerID="b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.811709 4982 scope.go:117] "RemoveContainer" containerID="044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.812144 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79\": container with ID starting with 044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79 not found: ID does not exist" containerID="044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.812173 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79"} err="failed to get container status \"044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79\": rpc error: code = NotFound desc = could not find container \"044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79\": container with ID starting with 044ae2fb3e3253aaa735cda24caf07ad8698551ffcfe7573402261c71dab4d79 not found: ID does not exist" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.812195 4982 scope.go:117] "RemoveContainer" containerID="6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.813476 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077\": container with ID starting with 6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077 not found: ID does not exist" containerID="6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.813503 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077"} err="failed to get container status \"6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077\": rpc error: code = NotFound desc = could not find container \"6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077\": container with ID starting with 6d43890adf64b8f6e371669dfb4d384dbffbbf4e2f290857adaf0c57c4c89077 not found: ID does not exist" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.813522 4982 scope.go:117] "RemoveContainer" containerID="8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.813861 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c\": container with ID starting with 8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c not found: ID does not exist" containerID="8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.813886 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c"} err="failed to get container status \"8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c\": rpc error: code = NotFound desc = could not find container \"8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c\": container with ID starting with 8f5d887246bc76347312397854c54a267bdeb846f309e6659fed7903190f906c not found: ID does not exist" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.813906 4982 scope.go:117] "RemoveContainer" containerID="b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786" Jan 23 08:18:42 crc kubenswrapper[4982]: E0123 08:18:42.814218 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786\": container with ID starting with b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786 not found: ID does not exist" containerID="b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.814241 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786"} err="failed to get container status \"b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786\": rpc error: code = NotFound desc = could not find container \"b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786\": container with ID starting with b539d4262aaf2e7901159d3d97dcc603efaae2ada329483008b4b23823e71786 not found: ID does not exist" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-scripts\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mjs\" (UniqueName: \"kubernetes.io/projected/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-kube-api-access-j9mjs\") pod \"ceilometer-0\" (UID: 
\"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824776 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824793 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.824831 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-config-data\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.926239 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.926382 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-scripts\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.926903 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9mjs\" (UniqueName: \"kubernetes.io/projected/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-kube-api-access-j9mjs\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.926965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.927074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.927447 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.927580 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-config-data\") pod 
\"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.927640 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.928738 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.932576 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-scripts\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.935203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-config-data\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.939267 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.949083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9mjs\" (UniqueName: \"kubernetes.io/projected/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-kube-api-access-j9mjs\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:42 crc kubenswrapper[4982]: I0123 08:18:42.954970 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " pod="openstack/ceilometer-0" Jan 23 08:18:43 crc kubenswrapper[4982]: I0123 08:18:43.101182 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:18:43 crc kubenswrapper[4982]: I0123 08:18:43.555511 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:18:43 crc kubenswrapper[4982]: I0123 08:18:43.624860 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerStarted","Data":"08ce4c038d413735c361eacb56dc4cbdcb5532781eed332406512cabc8b5913c"} Jan 23 08:18:43 crc kubenswrapper[4982]: I0123 08:18:43.846658 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33c02337-d0e4-4d11-b328-77ba4bbfa614" path="/var/lib/kubelet/pods/33c02337-d0e4-4d11-b328-77ba4bbfa614/volumes" Jan 23 08:18:44 crc kubenswrapper[4982]: I0123 08:18:44.641186 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerStarted","Data":"3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2"} Jan 23 08:18:45 crc kubenswrapper[4982]: I0123 08:18:45.660990 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerStarted","Data":"4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51"} Jan 23 08:18:46 crc kubenswrapper[4982]: I0123 08:18:46.672881 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerStarted","Data":"00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54"} Jan 23 08:18:47 crc kubenswrapper[4982]: I0123 08:18:47.684106 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerStarted","Data":"bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba"} Jan 23 08:18:47 crc kubenswrapper[4982]: I0123 08:18:47.684737 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 08:18:47 crc kubenswrapper[4982]: I0123 08:18:47.715026 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.056518068 podStartE2EDuration="5.715003582s" podCreationTimestamp="2026-01-23 08:18:42 +0000 UTC" firstStartedPulling="2026-01-23 08:18:43.562742323 +0000 UTC m=+1398.043105709" lastFinishedPulling="2026-01-23 08:18:47.221227817 +0000 UTC m=+1401.701591223" observedRunningTime="2026-01-23 08:18:47.709225447 +0000 UTC m=+1402.189588833" watchObservedRunningTime="2026-01-23 08:18:47.715003582 +0000 UTC m=+1402.195366968" Jan 23 08:18:49 crc kubenswrapper[4982]: I0123 08:18:49.703555 4982 generic.go:334] "Generic (PLEG): container finished" podID="c55f5436-8054-4e06-9fc7-a43a1700ebd3" containerID="12f0cebe9233222bfedaa257c3731f1a120a201d5d813c1562bc768ebb34db66" exitCode=0 Jan 23 08:18:49 crc kubenswrapper[4982]: I0123 08:18:49.703901 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ppn57" event={"ID":"c55f5436-8054-4e06-9fc7-a43a1700ebd3","Type":"ContainerDied","Data":"12f0cebe9233222bfedaa257c3731f1a120a201d5d813c1562bc768ebb34db66"} Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.144748 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.226035 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-combined-ca-bundle\") pod \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.226127 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-config-data\") pod \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.226194 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-scripts\") pod \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.226263 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf27w\" (UniqueName: \"kubernetes.io/projected/c55f5436-8054-4e06-9fc7-a43a1700ebd3-kube-api-access-pf27w\") pod \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\" (UID: \"c55f5436-8054-4e06-9fc7-a43a1700ebd3\") " Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.232524 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-scripts" (OuterVolumeSpecName: "scripts") pod "c55f5436-8054-4e06-9fc7-a43a1700ebd3" (UID: "c55f5436-8054-4e06-9fc7-a43a1700ebd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.232593 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55f5436-8054-4e06-9fc7-a43a1700ebd3-kube-api-access-pf27w" (OuterVolumeSpecName: "kube-api-access-pf27w") pod "c55f5436-8054-4e06-9fc7-a43a1700ebd3" (UID: "c55f5436-8054-4e06-9fc7-a43a1700ebd3"). InnerVolumeSpecName "kube-api-access-pf27w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.263421 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c55f5436-8054-4e06-9fc7-a43a1700ebd3" (UID: "c55f5436-8054-4e06-9fc7-a43a1700ebd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.267178 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-config-data" (OuterVolumeSpecName: "config-data") pod "c55f5436-8054-4e06-9fc7-a43a1700ebd3" (UID: "c55f5436-8054-4e06-9fc7-a43a1700ebd3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.330448 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.330489 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.330507 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c55f5436-8054-4e06-9fc7-a43a1700ebd3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.330518 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf27w\" (UniqueName: \"kubernetes.io/projected/c55f5436-8054-4e06-9fc7-a43a1700ebd3-kube-api-access-pf27w\") on node \"crc\" DevicePath \"\"" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.739215 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ppn57" event={"ID":"c55f5436-8054-4e06-9fc7-a43a1700ebd3","Type":"ContainerDied","Data":"d053a2c1f6c1927f8d80c864ef729d1c5fd7cff616708f2e260b0109f1bed346"} Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.739259 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d053a2c1f6c1927f8d80c864ef729d1c5fd7cff616708f2e260b0109f1bed346" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.739287 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ppn57" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.841582 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 08:18:51 crc kubenswrapper[4982]: E0123 08:18:51.841985 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55f5436-8054-4e06-9fc7-a43a1700ebd3" containerName="nova-cell0-conductor-db-sync" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.841998 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55f5436-8054-4e06-9fc7-a43a1700ebd3" containerName="nova-cell0-conductor-db-sync" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.842220 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c55f5436-8054-4e06-9fc7-a43a1700ebd3" containerName="nova-cell0-conductor-db-sync" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.842921 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.845282 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.845486 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bgxwc" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.850124 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.940990 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.941038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr6rd\" (UniqueName: \"kubernetes.io/projected/271fae4a-bd02-4281-a588-2784769ed552-kube-api-access-rr6rd\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:51 crc kubenswrapper[4982]: I0123 08:18:51.941100 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.042810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.042878 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr6rd\" (UniqueName: \"kubernetes.io/projected/271fae4a-bd02-4281-a588-2784769ed552-kube-api-access-rr6rd\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.042954 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.058241 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.058529 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.061823 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr6rd\" (UniqueName: \"kubernetes.io/projected/271fae4a-bd02-4281-a588-2784769ed552-kube-api-access-rr6rd\") pod \"nova-cell0-conductor-0\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.172320 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.680489 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 08:18:52 crc kubenswrapper[4982]: I0123 08:18:52.777589 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"271fae4a-bd02-4281-a588-2784769ed552","Type":"ContainerStarted","Data":"cf1cdb5c5bd92db7ac7c59fbe9986f450464688f0f0ed2e108bc80e1cbaad779"} Jan 23 08:18:53 crc kubenswrapper[4982]: I0123 08:18:53.790144 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"271fae4a-bd02-4281-a588-2784769ed552","Type":"ContainerStarted","Data":"6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278"} Jan 23 08:18:53 crc kubenswrapper[4982]: I0123 08:18:53.791800 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:53 crc kubenswrapper[4982]: I0123 08:18:53.808294 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.808274446 podStartE2EDuration="2.808274446s" podCreationTimestamp="2026-01-23 08:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:18:53.808158993 +0000 UTC m=+1408.288522379" watchObservedRunningTime="2026-01-23 08:18:53.808274446 +0000 UTC m=+1408.288637832" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.207063 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.673197 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ppl7s"] Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.684150 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.690021 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.690909 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.698100 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ppl7s"] Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.764457 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-scripts\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.764553 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.764659 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-config-data\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.764744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj2gn\" (UniqueName: \"kubernetes.io/projected/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-kube-api-access-cj2gn\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.869349 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj2gn\" (UniqueName: \"kubernetes.io/projected/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-kube-api-access-cj2gn\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.869468 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-scripts\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.869520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.869582 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-config-data\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.895923 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-scripts\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.901289 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj2gn\" (UniqueName: \"kubernetes.io/projected/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-kube-api-access-cj2gn\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.902064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.921537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-config-data\") pod \"nova-cell0-cell-mapping-ppl7s\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.946686 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.948513 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:18:57 crc kubenswrapper[4982]: I0123 08:18:57.953089 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.020516 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.032572 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.066593 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.068606 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077196 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1333c4cb-cead-4d53-8568-b7226b77a536-logs\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077242 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077281 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6zw\" (UniqueName: \"kubernetes.io/projected/1333c4cb-cead-4d53-8568-b7226b77a536-kube-api-access-dz6zw\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69f131d5-490a-4d53-8fa4-8c0d8801c098-logs\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077343 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-config-data\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077362 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79v6\" (UniqueName: \"kubernetes.io/projected/69f131d5-490a-4d53-8fa4-8c0d8801c098-kube-api-access-s79v6\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077417 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-config-data\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.077467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.084617 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.119793 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.134123 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:18:58 crc 
kubenswrapper[4982]: I0123 08:18:58.139957 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.146722 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.152378 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.159731 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-nhbw6"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.161506 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.178856 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-nhbw6"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.182883 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-config-data\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.182921 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s79v6\" (UniqueName: \"kubernetes.io/projected/69f131d5-490a-4d53-8fa4-8c0d8801c098-kube-api-access-s79v6\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.182987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-config-data\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.183047 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.183074 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1333c4cb-cead-4d53-8568-b7226b77a536-logs\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.183096 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.183124 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6zw\" (UniqueName: \"kubernetes.io/projected/1333c4cb-cead-4d53-8568-b7226b77a536-kube-api-access-dz6zw\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc 
kubenswrapper[4982]: I0123 08:18:58.183142 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69f131d5-490a-4d53-8fa4-8c0d8801c098-logs\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.183530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69f131d5-490a-4d53-8fa4-8c0d8801c098-logs\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.187150 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1333c4cb-cead-4d53-8568-b7226b77a536-logs\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.203908 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-config-data\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.204347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.217288 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.223951 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-config-data\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.233192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6zw\" (UniqueName: \"kubernetes.io/projected/1333c4cb-cead-4d53-8568-b7226b77a536-kube-api-access-dz6zw\") pod \"nova-metadata-0\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.245211 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79v6\" (UniqueName: \"kubernetes.io/projected/69f131d5-490a-4d53-8fa4-8c0d8801c098-kube-api-access-s79v6\") pod \"nova-api-0\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284744 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284777 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfn7\" (UniqueName: \"kubernetes.io/projected/54f8e0fa-7196-4b84-abec-cdea9bc79140-kube-api-access-drfn7\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284817 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284842 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvbn\" (UniqueName: \"kubernetes.io/projected/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-kube-api-access-wcvbn\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284867 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284910 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-config\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.284942 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.286404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.303378 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.304889 4982 util.go:30] "No sandbox for pod can be found. 
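
Every pod in this wave also gets a kube-api-access-* volume (drfn7, wcvbn, cj2gn, s79v6, dz6zw): a projected volume in which the kubelet bundles the service-account token, ca.crt, and namespace before the containers start. Inside the running container those files land at a well-known path; a tiny reader, only meaningful when run inside a pod:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The kubelet materializes the kube-api-access-* projected volume here.
	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(dir, f))
		if err != nil {
			fmt.Println(f, "not available (not running in a pod?):", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}
```
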
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.310110 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.326300 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.333595 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.394725 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.394817 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvbn\" (UniqueName: \"kubernetes.io/projected/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-kube-api-access-wcvbn\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.394860 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.394986 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-config\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.395032 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.395063 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.395352 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.395383 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: 
\"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.395449 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfn7\" (UniqueName: \"kubernetes.io/projected/54f8e0fa-7196-4b84-abec-cdea9bc79140-kube-api-access-drfn7\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.397355 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.397387 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.399395 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.400228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.402613 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.402957 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.405776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-config\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.414328 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfn7\" (UniqueName: \"kubernetes.io/projected/54f8e0fa-7196-4b84-abec-cdea9bc79140-kube-api-access-drfn7\") pod \"dnsmasq-dns-647df7b8c5-nhbw6\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: 
I0123 08:18:58.420734 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvbn\" (UniqueName: \"kubernetes.io/projected/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-kube-api-access-wcvbn\") pod \"nova-cell1-novncproxy-0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.497608 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq2zl\" (UniqueName: \"kubernetes.io/projected/f484f641-834b-488f-a4fa-543ae864902d-kube-api-access-mq2zl\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.497948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.497983 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-config-data\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.508163 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.576931 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.599434 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.601711 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq2zl\" (UniqueName: \"kubernetes.io/projected/f484f641-834b-488f-a4fa-543ae864902d-kube-api-access-mq2zl\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.601760 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.601797 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-config-data\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.609682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-config-data\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.611434 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.624111 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq2zl\" (UniqueName: \"kubernetes.io/projected/f484f641-834b-488f-a4fa-543ae864902d-kube-api-access-mq2zl\") pod \"nova-scheduler-0\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.694666 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.717339 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ppl7s"] Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.873876 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ppl7s" event={"ID":"85a6abf4-6ced-43b8-9b90-f525d6ed23a7","Type":"ContainerStarted","Data":"e727e8ab67bf5dfef056650b3e020067fd62e765f78706351cae124eb354524a"} Jan 23 08:18:58 crc kubenswrapper[4982]: I0123 08:18:58.889958 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.140099 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.218154 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:18:59 crc kubenswrapper[4982]: W0123 08:18:59.224997 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31fea2ab_c75c_4825_bb9e_7d3bcdd551e0.slice/crio-c0848f1dc9560bea99aaa4def52c87df28e5380f56dc513b11b65a9c0db8019f WatchSource:0}: Error finding container c0848f1dc9560bea99aaa4def52c87df28e5380f56dc513b11b65a9c0db8019f: Status 404 returned error can't find the container with id c0848f1dc9560bea99aaa4def52c87df28e5380f56dc513b11b65a9c0db8019f Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.328739 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-nhbw6"] Jan 23 08:18:59 crc kubenswrapper[4982]: W0123 08:18:59.339865 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54f8e0fa_7196_4b84_abec_cdea9bc79140.slice/crio-4534e07402348eb76b623ba47f0fab7725784f5f2154fafaa0e827ad349d9c84 WatchSource:0}: Error finding container 4534e07402348eb76b623ba47f0fab7725784f5f2154fafaa0e827ad349d9c84: Status 404 returned error can't find the container with id 4534e07402348eb76b623ba47f0fab7725784f5f2154fafaa0e827ad349d9c84 Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.349973 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:18:59 crc kubenswrapper[4982]: W0123 08:18:59.364311 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf484f641_834b_488f_a4fa_543ae864902d.slice/crio-2c31ccb79b9ff10c29a9f5ac8b2e1edd890a752c05de5e3a40ad22a906fa4c46 WatchSource:0}: Error finding container 2c31ccb79b9ff10c29a9f5ac8b2e1edd890a752c05de5e3a40ad22a906fa4c46: Status 404 returned error can't find the container with id 2c31ccb79b9ff10c29a9f5ac8b2e1edd890a752c05de5e3a40ad22a906fa4c46 Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.439757 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-52hsl"] Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.446235 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.448764 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-52hsl"] Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.449464 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.449764 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.538608 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-config-data\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.538709 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5sdt\" (UniqueName: \"kubernetes.io/projected/a5ab1abc-5f0e-4139-8d32-06470f033cb0-kube-api-access-f5sdt\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.538840 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.538909 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-scripts\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.641357 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-config-data\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.641694 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5sdt\" (UniqueName: \"kubernetes.io/projected/a5ab1abc-5f0e-4139-8d32-06470f033cb0-kube-api-access-f5sdt\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.641755 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.641795 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-scripts\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.647970 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-scripts\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.650227 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.652356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-config-data\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.663374 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5sdt\" (UniqueName: \"kubernetes.io/projected/a5ab1abc-5f0e-4139-8d32-06470f033cb0-kube-api-access-f5sdt\") pod \"nova-cell1-conductor-db-sync-52hsl\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.833651 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.902869 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f484f641-834b-488f-a4fa-543ae864902d","Type":"ContainerStarted","Data":"2c31ccb79b9ff10c29a9f5ac8b2e1edd890a752c05de5e3a40ad22a906fa4c46"} Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.909153 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0","Type":"ContainerStarted","Data":"c0848f1dc9560bea99aaa4def52c87df28e5380f56dc513b11b65a9c0db8019f"} Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.910210 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1333c4cb-cead-4d53-8568-b7226b77a536","Type":"ContainerStarted","Data":"cf574589b85423f5c9fa322ae6c8caf5fcd423ec2bb6e0706c3420dcc042093f"} Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.911257 4982 generic.go:334] "Generic (PLEG): container finished" podID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerID="0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d" exitCode=0 Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.911301 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" event={"ID":"54f8e0fa-7196-4b84-abec-cdea9bc79140","Type":"ContainerDied","Data":"0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d"} Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.911318 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" event={"ID":"54f8e0fa-7196-4b84-abec-cdea9bc79140","Type":"ContainerStarted","Data":"4534e07402348eb76b623ba47f0fab7725784f5f2154fafaa0e827ad349d9c84"} Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.958177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ppl7s" event={"ID":"85a6abf4-6ced-43b8-9b90-f525d6ed23a7","Type":"ContainerStarted","Data":"71edc0177ff0d5933505b059608613209efb4dbc155b1725e15071efbadbedeb"} Jan 23 08:18:59 crc kubenswrapper[4982]: I0123 08:18:59.987843 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69f131d5-490a-4d53-8fa4-8c0d8801c098","Type":"ContainerStarted","Data":"87399efcbad6d4c2c4b696186fd13ce041a984721cce3bca75fa275ce0569d1e"} Jan 23 08:19:00 crc kubenswrapper[4982]: I0123 08:19:00.075358 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ppl7s" podStartSLOduration=3.075339922 podStartE2EDuration="3.075339922s" podCreationTimestamp="2026-01-23 08:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:00.035452047 +0000 UTC m=+1414.515815433" watchObservedRunningTime="2026-01-23 08:19:00.075339922 +0000 UTC m=+1414.555703298" Jan 23 08:19:00 crc kubenswrapper[4982]: I0123 08:19:00.673610 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-52hsl"] Jan 23 08:19:01 crc kubenswrapper[4982]: I0123 08:19:01.005574 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" event={"ID":"54f8e0fa-7196-4b84-abec-cdea9bc79140","Type":"ContainerStarted","Data":"c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d"} Jan 23 
08:19:01 crc kubenswrapper[4982]: I0123 08:19:01.007217 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:19:01 crc kubenswrapper[4982]: I0123 08:19:01.011306 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-52hsl" event={"ID":"a5ab1abc-5f0e-4139-8d32-06470f033cb0","Type":"ContainerStarted","Data":"a0a278ad4f37080739de8fb833e186ba676e7329e70a643b9f6c6e0f7758489d"} Jan 23 08:19:01 crc kubenswrapper[4982]: I0123 08:19:01.042805 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" podStartSLOduration=3.042781403 podStartE2EDuration="3.042781403s" podCreationTimestamp="2026-01-23 08:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:01.030533036 +0000 UTC m=+1415.510896432" watchObservedRunningTime="2026-01-23 08:19:01.042781403 +0000 UTC m=+1415.523144789" Jan 23 08:19:02 crc kubenswrapper[4982]: I0123 08:19:02.024689 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:02 crc kubenswrapper[4982]: I0123 08:19:02.026365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-52hsl" event={"ID":"a5ab1abc-5f0e-4139-8d32-06470f033cb0","Type":"ContainerStarted","Data":"6a770ba4faff5c9a5f7042939adde83760f92b60f02a6a606ff4ba387c94c058"} Jan 23 08:19:02 crc kubenswrapper[4982]: I0123 08:19:02.057502 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-52hsl" podStartSLOduration=3.057366463 podStartE2EDuration="3.057366463s" podCreationTimestamp="2026-01-23 08:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:02.0415215 +0000 UTC m=+1416.521884896" watchObservedRunningTime="2026-01-23 08:19:02.057366463 +0000 UTC m=+1416.537729859" Jan 23 08:19:02 crc kubenswrapper[4982]: I0123 08:19:02.103904 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.092589 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69f131d5-490a-4d53-8fa4-8c0d8801c098","Type":"ContainerStarted","Data":"990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d"} Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.093126 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69f131d5-490a-4d53-8fa4-8c0d8801c098","Type":"ContainerStarted","Data":"e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf"} Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.095443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f484f641-834b-488f-a4fa-543ae864902d","Type":"ContainerStarted","Data":"c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512"} Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.097106 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0","Type":"ContainerStarted","Data":"9f44f2214b8541e68400460c130ff8534316139d205c85096653b9d99f9d7ae2"} Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.097134 4982 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9f44f2214b8541e68400460c130ff8534316139d205c85096653b9d99f9d7ae2" gracePeriod=30 Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.100732 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1333c4cb-cead-4d53-8568-b7226b77a536","Type":"ContainerStarted","Data":"4d99765eab340c99fd3cbe667f13d3fe2dc034d3e4a18fb1d551c450a76b4114"} Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.100775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1333c4cb-cead-4d53-8568-b7226b77a536","Type":"ContainerStarted","Data":"d8fafdf3e15fca5632df15734a3d3f806a09aa95390dcb713313e1dfa5473272"} Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.100949 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-log" containerID="cri-o://d8fafdf3e15fca5632df15734a3d3f806a09aa95390dcb713313e1dfa5473272" gracePeriod=30 Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.101100 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-metadata" containerID="cri-o://4d99765eab340c99fd3cbe667f13d3fe2dc034d3e4a18fb1d551c450a76b4114" gracePeriod=30 Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.121254 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.047232101 podStartE2EDuration="10.121232512s" podCreationTimestamp="2026-01-23 08:18:57 +0000 UTC" firstStartedPulling="2026-01-23 08:18:58.953483207 +0000 UTC m=+1413.433846593" lastFinishedPulling="2026-01-23 08:19:06.027483618 +0000 UTC m=+1420.507847004" observedRunningTime="2026-01-23 08:19:07.111342528 +0000 UTC m=+1421.591705914" watchObservedRunningTime="2026-01-23 08:19:07.121232512 +0000 UTC m=+1421.601596058" Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.136947 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.259374046 podStartE2EDuration="10.136922571s" podCreationTimestamp="2026-01-23 08:18:57 +0000 UTC" firstStartedPulling="2026-01-23 08:18:59.150040156 +0000 UTC m=+1413.630403542" lastFinishedPulling="2026-01-23 08:19:06.027588681 +0000 UTC m=+1420.507952067" observedRunningTime="2026-01-23 08:19:07.132506894 +0000 UTC m=+1421.612870280" watchObservedRunningTime="2026-01-23 08:19:07.136922571 +0000 UTC m=+1421.617285957" Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.159064 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.504893331 podStartE2EDuration="9.159043622s" podCreationTimestamp="2026-01-23 08:18:58 +0000 UTC" firstStartedPulling="2026-01-23 08:18:59.370315647 +0000 UTC m=+1413.850679033" lastFinishedPulling="2026-01-23 08:19:06.024465938 +0000 UTC m=+1420.504829324" observedRunningTime="2026-01-23 08:19:07.149720213 +0000 UTC m=+1421.630083599" watchObservedRunningTime="2026-01-23 08:19:07.159043622 +0000 UTC m=+1421.639407008" Jan 23 08:19:07 crc kubenswrapper[4982]: I0123 08:19:07.171370 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.370942216 podStartE2EDuration="9.171348171s" podCreationTimestamp="2026-01-23 08:18:58 +0000 UTC" firstStartedPulling="2026-01-23 08:18:59.229772485 +0000 UTC m=+1413.710135871" lastFinishedPulling="2026-01-23 08:19:06.03017844 +0000 UTC m=+1420.510541826" observedRunningTime="2026-01-23 08:19:07.162520145 +0000 UTC m=+1421.642883531" watchObservedRunningTime="2026-01-23 08:19:07.171348171 +0000 UTC m=+1421.651711557" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.119322 4982 generic.go:334] "Generic (PLEG): container finished" podID="1333c4cb-cead-4d53-8568-b7226b77a536" containerID="4d99765eab340c99fd3cbe667f13d3fe2dc034d3e4a18fb1d551c450a76b4114" exitCode=0 Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.119728 4982 generic.go:334] "Generic (PLEG): container finished" podID="1333c4cb-cead-4d53-8568-b7226b77a536" containerID="d8fafdf3e15fca5632df15734a3d3f806a09aa95390dcb713313e1dfa5473272" exitCode=143 Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.119409 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1333c4cb-cead-4d53-8568-b7226b77a536","Type":"ContainerDied","Data":"4d99765eab340c99fd3cbe667f13d3fe2dc034d3e4a18fb1d551c450a76b4114"} Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.119842 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1333c4cb-cead-4d53-8568-b7226b77a536","Type":"ContainerDied","Data":"d8fafdf3e15fca5632df15734a3d3f806a09aa95390dcb713313e1dfa5473272"} Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.232081 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.335006 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.335562 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.338561 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1333c4cb-cead-4d53-8568-b7226b77a536-logs\") pod \"1333c4cb-cead-4d53-8568-b7226b77a536\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.338692 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-combined-ca-bundle\") pod \"1333c4cb-cead-4d53-8568-b7226b77a536\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.338755 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-config-data\") pod \"1333c4cb-cead-4d53-8568-b7226b77a536\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.338961 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz6zw\" (UniqueName: \"kubernetes.io/projected/1333c4cb-cead-4d53-8568-b7226b77a536-kube-api-access-dz6zw\") pod \"1333c4cb-cead-4d53-8568-b7226b77a536\" (UID: \"1333c4cb-cead-4d53-8568-b7226b77a536\") " Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.339689 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1333c4cb-cead-4d53-8568-b7226b77a536-logs" (OuterVolumeSpecName: "logs") pod "1333c4cb-cead-4d53-8568-b7226b77a536" (UID: "1333c4cb-cead-4d53-8568-b7226b77a536"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.339926 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1333c4cb-cead-4d53-8568-b7226b77a536-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.344456 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1333c4cb-cead-4d53-8568-b7226b77a536-kube-api-access-dz6zw" (OuterVolumeSpecName: "kube-api-access-dz6zw") pod "1333c4cb-cead-4d53-8568-b7226b77a536" (UID: "1333c4cb-cead-4d53-8568-b7226b77a536"). InnerVolumeSpecName "kube-api-access-dz6zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.374378 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-config-data" (OuterVolumeSpecName: "config-data") pod "1333c4cb-cead-4d53-8568-b7226b77a536" (UID: "1333c4cb-cead-4d53-8568-b7226b77a536"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.389583 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1333c4cb-cead-4d53-8568-b7226b77a536" (UID: "1333c4cb-cead-4d53-8568-b7226b77a536"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.445032 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.445070 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1333c4cb-cead-4d53-8568-b7226b77a536-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.445083 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz6zw\" (UniqueName: \"kubernetes.io/projected/1333c4cb-cead-4d53-8568-b7226b77a536-kube-api-access-dz6zw\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.578128 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.602435 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.652574 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.652646 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.695363 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-m27gc"] Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.697787 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerName="dnsmasq-dns" containerID="cri-o://7816d0163bf0a0ad4da5b64c9c34faf8fd633daf3ed9dbdf0208d63d147f6090" gracePeriod=10 Jan 23 08:19:08 crc kubenswrapper[4982]: I0123 08:19:08.726354 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.145083 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1333c4cb-cead-4d53-8568-b7226b77a536","Type":"ContainerDied","Data":"cf574589b85423f5c9fa322ae6c8caf5fcd423ec2bb6e0706c3420dcc042093f"} Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.145319 4982 scope.go:117] "RemoveContainer" containerID="4d99765eab340c99fd3cbe667f13d3fe2dc034d3e4a18fb1d551c450a76b4114" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.145466 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.169501 4982 generic.go:334] "Generic (PLEG): container finished" podID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerID="7816d0163bf0a0ad4da5b64c9c34faf8fd633daf3ed9dbdf0208d63d147f6090" exitCode=0 Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.169576 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" event={"ID":"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9","Type":"ContainerDied","Data":"7816d0163bf0a0ad4da5b64c9c34faf8fd633daf3ed9dbdf0208d63d147f6090"} Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.232035 4982 scope.go:117] "RemoveContainer" containerID="d8fafdf3e15fca5632df15734a3d3f806a09aa95390dcb713313e1dfa5473272" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.245129 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.269797 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.283202 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.305950 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:09 crc kubenswrapper[4982]: E0123 08:19:09.306687 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-log" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.306715 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-log" Jan 23 08:19:09 crc kubenswrapper[4982]: E0123 08:19:09.306754 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-metadata" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.306763 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-metadata" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.307082 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-log" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.307107 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" containerName="nova-metadata-metadata" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.308830 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.311731 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.311954 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.338763 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.420950 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.421112 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.472474 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eabbaae5-c370-4995-aacb-aafd7142fc0b-logs\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.472665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5d8j\" (UniqueName: \"kubernetes.io/projected/eabbaae5-c370-4995-aacb-aafd7142fc0b-kube-api-access-h5d8j\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.472713 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-config-data\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.472761 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.472794 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.477949 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-svc\") pod \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574343 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-sb\") pod \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574382 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-config\") pod \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574474 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-nb\") pod \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574505 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbfbn\" (UniqueName: \"kubernetes.io/projected/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-kube-api-access-tbfbn\") pod \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574571 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-swift-storage-0\") pod \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\" (UID: \"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9\") " Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eabbaae5-c370-4995-aacb-aafd7142fc0b-logs\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574965 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5d8j\" (UniqueName: \"kubernetes.io/projected/eabbaae5-c370-4995-aacb-aafd7142fc0b-kube-api-access-h5d8j\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.574998 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-config-data\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.575050 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.575076 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.581392 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-kube-api-access-tbfbn" (OuterVolumeSpecName: "kube-api-access-tbfbn") pod "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" (UID: "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9"). InnerVolumeSpecName "kube-api-access-tbfbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.592286 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eabbaae5-c370-4995-aacb-aafd7142fc0b-logs\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.592951 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.593155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-config-data\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.598746 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5d8j\" (UniqueName: \"kubernetes.io/projected/eabbaae5-c370-4995-aacb-aafd7142fc0b-kube-api-access-h5d8j\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.599484 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.658395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" (UID: "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.664468 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-config" (OuterVolumeSpecName: "config") pod "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" (UID: "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.668149 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" (UID: "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.673147 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" (UID: "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.677953 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.677994 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.678005 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.678015 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbfbn\" (UniqueName: \"kubernetes.io/projected/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-kube-api-access-tbfbn\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.678027 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.728752 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" (UID: "792dd2a0-6d18-440d-becb-f3a3f1ff7fc9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.772569 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.780452 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:09 crc kubenswrapper[4982]: I0123 08:19:09.841188 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1333c4cb-cead-4d53-8568-b7226b77a536" path="/var/lib/kubelet/pods/1333c4cb-cead-4d53-8568-b7226b77a536/volumes" Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.191801 4982 generic.go:334] "Generic (PLEG): container finished" podID="85a6abf4-6ced-43b8-9b90-f525d6ed23a7" containerID="71edc0177ff0d5933505b059608613209efb4dbc155b1725e15071efbadbedeb" exitCode=0 Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.191883 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ppl7s" event={"ID":"85a6abf4-6ced-43b8-9b90-f525d6ed23a7","Type":"ContainerDied","Data":"71edc0177ff0d5933505b059608613209efb4dbc155b1725e15071efbadbedeb"} Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.196055 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" event={"ID":"792dd2a0-6d18-440d-becb-f3a3f1ff7fc9","Type":"ContainerDied","Data":"d723ab3c106ddd0fa24e1ad6801766000e485b5409ee10e0e7b014c5535129c3"} Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.196157 4982 scope.go:117] "RemoveContainer" containerID="7816d0163bf0a0ad4da5b64c9c34faf8fd633daf3ed9dbdf0208d63d147f6090" Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.196090 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-m27gc" Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.238437 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-m27gc"] Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.239585 4982 scope.go:117] "RemoveContainer" containerID="6d6c20a93d976a8163952bb3139ca0124fdfa3d7846d67b2af3e18e272c20542" Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.250545 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-m27gc"] Jan 23 08:19:10 crc kubenswrapper[4982]: W0123 08:19:10.304855 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeabbaae5_c370_4995_aacb_aafd7142fc0b.slice/crio-7d1854baf4883df97d6749aaa80b15f6ce46cbbfe897d0e5bb21145998355fa6 WatchSource:0}: Error finding container 7d1854baf4883df97d6749aaa80b15f6ce46cbbfe897d0e5bb21145998355fa6: Status 404 returned error can't find the container with id 7d1854baf4883df97d6749aaa80b15f6ce46cbbfe897d0e5bb21145998355fa6 Jan 23 08:19:10 crc kubenswrapper[4982]: I0123 08:19:10.326134 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.210542 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eabbaae5-c370-4995-aacb-aafd7142fc0b","Type":"ContainerStarted","Data":"7d1854baf4883df97d6749aaa80b15f6ce46cbbfe897d0e5bb21145998355fa6"} Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.435551 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.435837 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.435885 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.436701 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ff1d104ed341faafc779f69e2fa548cf764b2dac0f08d4eb5491e25e74751a0"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.436748 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://4ff1d104ed341faafc779f69e2fa548cf764b2dac0f08d4eb5491e25e74751a0" gracePeriod=600 Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.698216 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.832030 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj2gn\" (UniqueName: \"kubernetes.io/projected/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-kube-api-access-cj2gn\") pod \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.832095 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-scripts\") pod \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.832183 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-combined-ca-bundle\") pod \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.832391 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-config-data\") pod \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\" (UID: \"85a6abf4-6ced-43b8-9b90-f525d6ed23a7\") " Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.837596 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-scripts" (OuterVolumeSpecName: "scripts") pod "85a6abf4-6ced-43b8-9b90-f525d6ed23a7" (UID: "85a6abf4-6ced-43b8-9b90-f525d6ed23a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.838181 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-kube-api-access-cj2gn" (OuterVolumeSpecName: "kube-api-access-cj2gn") pod "85a6abf4-6ced-43b8-9b90-f525d6ed23a7" (UID: "85a6abf4-6ced-43b8-9b90-f525d6ed23a7"). InnerVolumeSpecName "kube-api-access-cj2gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.849684 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" path="/var/lib/kubelet/pods/792dd2a0-6d18-440d-becb-f3a3f1ff7fc9/volumes" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.865690 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85a6abf4-6ced-43b8-9b90-f525d6ed23a7" (UID: "85a6abf4-6ced-43b8-9b90-f525d6ed23a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.879880 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-config-data" (OuterVolumeSpecName: "config-data") pod "85a6abf4-6ced-43b8-9b90-f525d6ed23a7" (UID: "85a6abf4-6ced-43b8-9b90-f525d6ed23a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.935646 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.935683 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj2gn\" (UniqueName: \"kubernetes.io/projected/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-kube-api-access-cj2gn\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.935693 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:11 crc kubenswrapper[4982]: I0123 08:19:11.935703 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85a6abf4-6ced-43b8-9b90-f525d6ed23a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.226249 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eabbaae5-c370-4995-aacb-aafd7142fc0b","Type":"ContainerStarted","Data":"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6"} Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.228825 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eabbaae5-c370-4995-aacb-aafd7142fc0b","Type":"ContainerStarted","Data":"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef"} Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.230202 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="4ff1d104ed341faafc779f69e2fa548cf764b2dac0f08d4eb5491e25e74751a0" 
exitCode=0 Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.230256 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"4ff1d104ed341faafc779f69e2fa548cf764b2dac0f08d4eb5491e25e74751a0"} Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.230273 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"} Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.230290 4982 scope.go:117] "RemoveContainer" containerID="c77a76c13f380fffd54942065cc632450e7687109faf0320bb3cc074932836d4" Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.233578 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ppl7s" event={"ID":"85a6abf4-6ced-43b8-9b90-f525d6ed23a7","Type":"ContainerDied","Data":"e727e8ab67bf5dfef056650b3e020067fd62e765f78706351cae124eb354524a"} Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.233615 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e727e8ab67bf5dfef056650b3e020067fd62e765f78706351cae124eb354524a" Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.233693 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ppl7s" Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.279526 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.279506782 podStartE2EDuration="3.279506782s" podCreationTimestamp="2026-01-23 08:19:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:12.24645658 +0000 UTC m=+1426.726819966" watchObservedRunningTime="2026-01-23 08:19:12.279506782 +0000 UTC m=+1426.759870168" Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.395091 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.395378 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-log" containerID="cri-o://e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf" gracePeriod=30 Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.395913 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-api" containerID="cri-o://990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d" gracePeriod=30 Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.426596 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.429970 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f484f641-834b-488f-a4fa-543ae864902d" containerName="nova-scheduler-scheduler" containerID="cri-o://c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" gracePeriod=30 Jan 23 08:19:12 crc kubenswrapper[4982]: I0123 08:19:12.453326 4982 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:13 crc kubenswrapper[4982]: I0123 08:19:13.107335 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 08:19:13 crc kubenswrapper[4982]: I0123 08:19:13.255396 4982 generic.go:334] "Generic (PLEG): container finished" podID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerID="e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf" exitCode=143 Jan 23 08:19:13 crc kubenswrapper[4982]: I0123 08:19:13.257318 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69f131d5-490a-4d53-8fa4-8c0d8801c098","Type":"ContainerDied","Data":"e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf"} Jan 23 08:19:13 crc kubenswrapper[4982]: E0123 08:19:13.654642 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:19:13 crc kubenswrapper[4982]: E0123 08:19:13.656984 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:19:13 crc kubenswrapper[4982]: E0123 08:19:13.658428 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:19:13 crc kubenswrapper[4982]: E0123 08:19:13.658492 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f484f641-834b-488f-a4fa-543ae864902d" containerName="nova-scheduler-scheduler" Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.266091 4982 generic.go:334] "Generic (PLEG): container finished" podID="a5ab1abc-5f0e-4139-8d32-06470f033cb0" containerID="6a770ba4faff5c9a5f7042939adde83760f92b60f02a6a606ff4ba387c94c058" exitCode=0 Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.266473 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-52hsl" event={"ID":"a5ab1abc-5f0e-4139-8d32-06470f033cb0","Type":"ContainerDied","Data":"6a770ba4faff5c9a5f7042939adde83760f92b60f02a6a606ff4ba387c94c058"} Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.266879 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-log" containerID="cri-o://ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef" gracePeriod=30 Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.266892 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-metadata" 
containerID="cri-o://83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6" gracePeriod=30 Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.773277 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.773590 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.885847 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:14 crc kubenswrapper[4982]: I0123 08:19:14.981611 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.014344 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eabbaae5-c370-4995-aacb-aafd7142fc0b-logs\") pod \"eabbaae5-c370-4995-aacb-aafd7142fc0b\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.014425 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-combined-ca-bundle\") pod \"eabbaae5-c370-4995-aacb-aafd7142fc0b\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.014458 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5d8j\" (UniqueName: \"kubernetes.io/projected/eabbaae5-c370-4995-aacb-aafd7142fc0b-kube-api-access-h5d8j\") pod \"eabbaae5-c370-4995-aacb-aafd7142fc0b\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.014721 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-config-data\") pod \"eabbaae5-c370-4995-aacb-aafd7142fc0b\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.014839 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-nova-metadata-tls-certs\") pod \"eabbaae5-c370-4995-aacb-aafd7142fc0b\" (UID: \"eabbaae5-c370-4995-aacb-aafd7142fc0b\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.017956 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eabbaae5-c370-4995-aacb-aafd7142fc0b-logs" (OuterVolumeSpecName: "logs") pod "eabbaae5-c370-4995-aacb-aafd7142fc0b" (UID: "eabbaae5-c370-4995-aacb-aafd7142fc0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.022419 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabbaae5-c370-4995-aacb-aafd7142fc0b-kube-api-access-h5d8j" (OuterVolumeSpecName: "kube-api-access-h5d8j") pod "eabbaae5-c370-4995-aacb-aafd7142fc0b" (UID: "eabbaae5-c370-4995-aacb-aafd7142fc0b"). InnerVolumeSpecName "kube-api-access-h5d8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.051335 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-config-data" (OuterVolumeSpecName: "config-data") pod "eabbaae5-c370-4995-aacb-aafd7142fc0b" (UID: "eabbaae5-c370-4995-aacb-aafd7142fc0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.054679 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eabbaae5-c370-4995-aacb-aafd7142fc0b" (UID: "eabbaae5-c370-4995-aacb-aafd7142fc0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.080104 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "eabbaae5-c370-4995-aacb-aafd7142fc0b" (UID: "eabbaae5-c370-4995-aacb-aafd7142fc0b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.116987 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq2zl\" (UniqueName: \"kubernetes.io/projected/f484f641-834b-488f-a4fa-543ae864902d-kube-api-access-mq2zl\") pod \"f484f641-834b-488f-a4fa-543ae864902d\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.117174 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-config-data\") pod \"f484f641-834b-488f-a4fa-543ae864902d\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.117201 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-combined-ca-bundle\") pod \"f484f641-834b-488f-a4fa-543ae864902d\" (UID: \"f484f641-834b-488f-a4fa-543ae864902d\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.117895 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eabbaae5-c370-4995-aacb-aafd7142fc0b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.117956 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.117974 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5d8j\" (UniqueName: \"kubernetes.io/projected/eabbaae5-c370-4995-aacb-aafd7142fc0b-kube-api-access-h5d8j\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.117986 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc 
kubenswrapper[4982]: I0123 08:19:15.117997 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eabbaae5-c370-4995-aacb-aafd7142fc0b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.120384 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f484f641-834b-488f-a4fa-543ae864902d-kube-api-access-mq2zl" (OuterVolumeSpecName: "kube-api-access-mq2zl") pod "f484f641-834b-488f-a4fa-543ae864902d" (UID: "f484f641-834b-488f-a4fa-543ae864902d"). InnerVolumeSpecName "kube-api-access-mq2zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.151310 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-config-data" (OuterVolumeSpecName: "config-data") pod "f484f641-834b-488f-a4fa-543ae864902d" (UID: "f484f641-834b-488f-a4fa-543ae864902d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.151868 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f484f641-834b-488f-a4fa-543ae864902d" (UID: "f484f641-834b-488f-a4fa-543ae864902d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.220069 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq2zl\" (UniqueName: \"kubernetes.io/projected/f484f641-834b-488f-a4fa-543ae864902d-kube-api-access-mq2zl\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.220104 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.220114 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f484f641-834b-488f-a4fa-543ae864902d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.279026 4982 generic.go:334] "Generic (PLEG): container finished" podID="f484f641-834b-488f-a4fa-543ae864902d" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" exitCode=0 Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.279103 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f484f641-834b-488f-a4fa-543ae864902d","Type":"ContainerDied","Data":"c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512"} Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.279137 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f484f641-834b-488f-a4fa-543ae864902d","Type":"ContainerDied","Data":"2c31ccb79b9ff10c29a9f5ac8b2e1edd890a752c05de5e3a40ad22a906fa4c46"} Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.279158 4982 scope.go:117] "RemoveContainer" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.279318 4982 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.283386 4982 generic.go:334] "Generic (PLEG): container finished" podID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerID="83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6" exitCode=0 Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.283458 4982 generic.go:334] "Generic (PLEG): container finished" podID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerID="ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef" exitCode=143 Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.283463 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.283476 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eabbaae5-c370-4995-aacb-aafd7142fc0b","Type":"ContainerDied","Data":"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6"} Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.283510 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eabbaae5-c370-4995-aacb-aafd7142fc0b","Type":"ContainerDied","Data":"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef"} Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.283532 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eabbaae5-c370-4995-aacb-aafd7142fc0b","Type":"ContainerDied","Data":"7d1854baf4883df97d6749aaa80b15f6ce46cbbfe897d0e5bb21145998355fa6"} Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.315377 4982 scope.go:117] "RemoveContainer" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.316308 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512\": container with ID starting with c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512 not found: ID does not exist" containerID="c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.316354 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512"} err="failed to get container status \"c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512\": rpc error: code = NotFound desc = could not find container \"c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512\": container with ID starting with c745b88b302739a83e6c1db021d48ee6b74d9a5a5a0f2cd5510785847340a512 not found: ID does not exist" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.316381 4982 scope.go:117] "RemoveContainer" containerID="83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.341979 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.371301 4982 scope.go:117] "RemoveContainer" containerID="ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.394096 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.409391 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.424721 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.431363 4982 scope.go:117] "RemoveContainer" containerID="83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.434754 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6\": container with ID starting with 83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6 not found: ID does not exist" containerID="83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.434799 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6"} err="failed to get container status \"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6\": rpc error: code = NotFound desc = could not find container \"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6\": container with ID starting with 83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6 not found: ID does not exist" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.434826 4982 scope.go:117] "RemoveContainer" containerID="ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.434916 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.435418 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f484f641-834b-488f-a4fa-543ae864902d" containerName="nova-scheduler-scheduler" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435429 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f484f641-834b-488f-a4fa-543ae864902d" containerName="nova-scheduler-scheduler" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.435453 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerName="init" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435460 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerName="init" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.435475 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-metadata" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435481 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-metadata" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.435495 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-log" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435501 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-log" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 
08:19:15.435521 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a6abf4-6ced-43b8-9b90-f525d6ed23a7" containerName="nova-manage" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435527 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a6abf4-6ced-43b8-9b90-f525d6ed23a7" containerName="nova-manage" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.435539 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerName="dnsmasq-dns" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435546 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerName="dnsmasq-dns" Jan 23 08:19:15 crc kubenswrapper[4982]: E0123 08:19:15.435720 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef\": container with ID starting with ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef not found: ID does not exist" containerID="ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435764 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="792dd2a0-6d18-440d-becb-f3a3f1ff7fc9" containerName="dnsmasq-dns" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435760 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef"} err="failed to get container status \"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef\": rpc error: code = NotFound desc = could not find container \"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef\": container with ID starting with ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef not found: ID does not exist" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435786 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f484f641-834b-488f-a4fa-543ae864902d" containerName="nova-scheduler-scheduler" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435789 4982 scope.go:117] "RemoveContainer" containerID="83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435800 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a6abf4-6ced-43b8-9b90-f525d6ed23a7" containerName="nova-manage" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435809 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-metadata" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.435819 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" containerName="nova-metadata-log" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.436206 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6"} err="failed to get container status \"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6\": rpc error: code = NotFound desc = could not find container \"83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6\": container with ID starting with 
83060d447d84ba11221c96d52dfcdc5bca574210f0f9f36887b89447dcb02aa6 not found: ID does not exist" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.436221 4982 scope.go:117] "RemoveContainer" containerID="ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.436497 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.437241 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef"} err="failed to get container status \"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef\": rpc error: code = NotFound desc = could not find container \"ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef\": container with ID starting with ceb96fd2e13e7194b3e5b391fd926fb047906ca059098d2bbc11f2cf48afe8ef not found: ID does not exist" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.441967 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.444568 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.453692 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.455524 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.459217 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.459482 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.466380 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.528583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-config-data\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.528662 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrj47\" (UniqueName: \"kubernetes.io/projected/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-kube-api-access-lrj47\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.528829 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.636908 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637051 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-config-data\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637075 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637108 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrj47\" (UniqueName: \"kubernetes.io/projected/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-kube-api-access-lrj47\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637131 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28q5k\" (UniqueName: \"kubernetes.io/projected/c418e171-9f17-48e4-9269-e9b315b2a2dd-kube-api-access-28q5k\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-config-data\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.637232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c418e171-9f17-48e4-9269-e9b315b2a2dd-logs\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.653871 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.656747 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-config-data\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc 
kubenswrapper[4982]: I0123 08:19:15.695342 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrj47\" (UniqueName: \"kubernetes.io/projected/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-kube-api-access-lrj47\") pod \"nova-scheduler-0\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.740952 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28q5k\" (UniqueName: \"kubernetes.io/projected/c418e171-9f17-48e4-9269-e9b315b2a2dd-kube-api-access-28q5k\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.741041 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-config-data\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.741105 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c418e171-9f17-48e4-9269-e9b315b2a2dd-logs\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.741211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.741285 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.745047 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c418e171-9f17-48e4-9269-e9b315b2a2dd-logs\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.767506 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-config-data\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.769240 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.771069 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.789297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.801785 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28q5k\" (UniqueName: \"kubernetes.io/projected/c418e171-9f17-48e4-9269-e9b315b2a2dd-kube-api-access-28q5k\") pod \"nova-metadata-0\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " pod="openstack/nova-metadata-0" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.863362 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.864113 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabbaae5-c370-4995-aacb-aafd7142fc0b" path="/var/lib/kubelet/pods/eabbaae5-c370-4995-aacb-aafd7142fc0b/volumes" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.864992 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f484f641-834b-488f-a4fa-543ae864902d" path="/var/lib/kubelet/pods/f484f641-834b-488f-a4fa-543ae864902d/volumes" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.948982 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-scripts\") pod \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.949290 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-combined-ca-bundle\") pod \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.949352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-config-data\") pod \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.949397 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5sdt\" (UniqueName: \"kubernetes.io/projected/a5ab1abc-5f0e-4139-8d32-06470f033cb0-kube-api-access-f5sdt\") pod \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\" (UID: \"a5ab1abc-5f0e-4139-8d32-06470f033cb0\") " Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.958736 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-scripts" (OuterVolumeSpecName: "scripts") pod "a5ab1abc-5f0e-4139-8d32-06470f033cb0" (UID: "a5ab1abc-5f0e-4139-8d32-06470f033cb0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:15 crc kubenswrapper[4982]: I0123 08:19:15.964836 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ab1abc-5f0e-4139-8d32-06470f033cb0-kube-api-access-f5sdt" (OuterVolumeSpecName: "kube-api-access-f5sdt") pod "a5ab1abc-5f0e-4139-8d32-06470f033cb0" (UID: "a5ab1abc-5f0e-4139-8d32-06470f033cb0"). InnerVolumeSpecName "kube-api-access-f5sdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.000229 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-config-data" (OuterVolumeSpecName: "config-data") pod "a5ab1abc-5f0e-4139-8d32-06470f033cb0" (UID: "a5ab1abc-5f0e-4139-8d32-06470f033cb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.024258 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5ab1abc-5f0e-4139-8d32-06470f033cb0" (UID: "a5ab1abc-5f0e-4139-8d32-06470f033cb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.056356 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5sdt\" (UniqueName: \"kubernetes.io/projected/a5ab1abc-5f0e-4139-8d32-06470f033cb0-kube-api-access-f5sdt\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.056795 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.056885 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.056974 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ab1abc-5f0e-4139-8d32-06470f033cb0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.096976 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.252601 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.295850 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-52hsl" event={"ID":"a5ab1abc-5f0e-4139-8d32-06470f033cb0","Type":"ContainerDied","Data":"a0a278ad4f37080739de8fb833e186ba676e7329e70a643b9f6c6e0f7758489d"} Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.295884 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0a278ad4f37080739de8fb833e186ba676e7329e70a643b9f6c6e0f7758489d" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.295898 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-52hsl" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.299909 4982 generic.go:334] "Generic (PLEG): container finished" podID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerID="990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d" exitCode=0 Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.299978 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.299990 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69f131d5-490a-4d53-8fa4-8c0d8801c098","Type":"ContainerDied","Data":"990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d"} Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.300025 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69f131d5-490a-4d53-8fa4-8c0d8801c098","Type":"ContainerDied","Data":"87399efcbad6d4c2c4b696186fd13ce041a984721cce3bca75fa275ce0569d1e"} Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.300049 4982 scope.go:117] "RemoveContainer" containerID="990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.329130 4982 scope.go:117] "RemoveContainer" containerID="e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.363139 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79v6\" (UniqueName: \"kubernetes.io/projected/69f131d5-490a-4d53-8fa4-8c0d8801c098-kube-api-access-s79v6\") pod \"69f131d5-490a-4d53-8fa4-8c0d8801c098\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.363211 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-combined-ca-bundle\") pod \"69f131d5-490a-4d53-8fa4-8c0d8801c098\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.363362 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69f131d5-490a-4d53-8fa4-8c0d8801c098-logs\") pod \"69f131d5-490a-4d53-8fa4-8c0d8801c098\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.363426 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-config-data\") pod \"69f131d5-490a-4d53-8fa4-8c0d8801c098\" (UID: \"69f131d5-490a-4d53-8fa4-8c0d8801c098\") " Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.364804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f131d5-490a-4d53-8fa4-8c0d8801c098-logs" (OuterVolumeSpecName: "logs") pod "69f131d5-490a-4d53-8fa4-8c0d8801c098" (UID: "69f131d5-490a-4d53-8fa4-8c0d8801c098"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.372395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f131d5-490a-4d53-8fa4-8c0d8801c098-kube-api-access-s79v6" (OuterVolumeSpecName: "kube-api-access-s79v6") pod "69f131d5-490a-4d53-8fa4-8c0d8801c098" (UID: "69f131d5-490a-4d53-8fa4-8c0d8801c098"). InnerVolumeSpecName "kube-api-access-s79v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.377530 4982 scope.go:117] "RemoveContainer" containerID="990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d" Jan 23 08:19:16 crc kubenswrapper[4982]: E0123 08:19:16.380120 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d\": container with ID starting with 990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d not found: ID does not exist" containerID="990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.380158 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d"} err="failed to get container status \"990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d\": rpc error: code = NotFound desc = could not find container \"990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d\": container with ID starting with 990e273244515ff0e74aa33c0a4c7f336089ccfe0b2d5378eec62c6d0704559d not found: ID does not exist" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.380183 4982 scope.go:117] "RemoveContainer" containerID="e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.397171 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: E0123 08:19:16.398040 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-api" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.398059 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-api" Jan 23 08:19:16 crc kubenswrapper[4982]: E0123 08:19:16.398073 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ab1abc-5f0e-4139-8d32-06470f033cb0" containerName="nova-cell1-conductor-db-sync" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.398080 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ab1abc-5f0e-4139-8d32-06470f033cb0" containerName="nova-cell1-conductor-db-sync" Jan 23 08:19:16 crc kubenswrapper[4982]: E0123 08:19:16.398092 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-log" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.398098 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-log" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.398354 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ab1abc-5f0e-4139-8d32-06470f033cb0" containerName="nova-cell1-conductor-db-sync" Jan 23 08:19:16 crc kubenswrapper[4982]: 
I0123 08:19:16.398368 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-log" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.398380 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" containerName="nova-api-api" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.399186 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.401552 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 08:19:16 crc kubenswrapper[4982]: E0123 08:19:16.404731 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf\": container with ID starting with e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf not found: ID does not exist" containerID="e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.404784 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf"} err="failed to get container status \"e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf\": rpc error: code = NotFound desc = could not find container \"e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf\": container with ID starting with e0a236c410d6d4ebf843e0fb3dd81fad7bc6bde51a98b8bf398087a42aa04fcf not found: ID does not exist" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.422225 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.435831 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.439002 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-config-data" (OuterVolumeSpecName: "config-data") pod "69f131d5-490a-4d53-8fa4-8c0d8801c098" (UID: "69f131d5-490a-4d53-8fa4-8c0d8801c098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.459190 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69f131d5-490a-4d53-8fa4-8c0d8801c098" (UID: "69f131d5-490a-4d53-8fa4-8c0d8801c098"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.466758 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.466792 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s79v6\" (UniqueName: \"kubernetes.io/projected/69f131d5-490a-4d53-8fa4-8c0d8801c098-kube-api-access-s79v6\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.466806 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f131d5-490a-4d53-8fa4-8c0d8801c098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.466819 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69f131d5-490a-4d53-8fa4-8c0d8801c098-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.568067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.568128 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.568251 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2fgw\" (UniqueName: \"kubernetes.io/projected/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-kube-api-access-w2fgw\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.656692 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.669992 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.670111 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.670221 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2fgw\" (UniqueName: \"kubernetes.io/projected/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-kube-api-access-w2fgw\") pod \"nova-cell1-conductor-0\" (UID: 
\"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.676392 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.676591 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.692243 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2fgw\" (UniqueName: \"kubernetes.io/projected/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-kube-api-access-w2fgw\") pod \"nova-cell1-conductor-0\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.695268 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.705814 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.718700 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.720845 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.732320 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.757794 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.873658 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnkxg\" (UniqueName: \"kubernetes.io/projected/36962664-0eb8-4871-8cf2-953c55dae470-kube-api-access-nnkxg\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.874259 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.874431 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-config-data\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.874611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36962664-0eb8-4871-8cf2-953c55dae470-logs\") pod \"nova-api-0\" (UID: 
\"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.976812 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnkxg\" (UniqueName: \"kubernetes.io/projected/36962664-0eb8-4871-8cf2-953c55dae470-kube-api-access-nnkxg\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.977044 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.977187 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-config-data\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.977303 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36962664-0eb8-4871-8cf2-953c55dae470-logs\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.977572 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.978065 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36962664-0eb8-4871-8cf2-953c55dae470-logs\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.982431 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:16 crc kubenswrapper[4982]: I0123 08:19:16.985923 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-config-data\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.005684 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnkxg\" (UniqueName: \"kubernetes.io/projected/36962664-0eb8-4871-8cf2-953c55dae470-kube-api-access-nnkxg\") pod \"nova-api-0\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") " pod="openstack/nova-api-0" Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.056886 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.323675 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98123d02-78b0-43af-af4f-4dc0ed1e8f9f","Type":"ContainerStarted","Data":"3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6"} Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.324032 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98123d02-78b0-43af-af4f-4dc0ed1e8f9f","Type":"ContainerStarted","Data":"cfaa097cd5baa542ae3f860d85cc707e4088b9b578e434950f7d1473744a950d"} Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.334050 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c418e171-9f17-48e4-9269-e9b315b2a2dd","Type":"ContainerStarted","Data":"0abee9de9e3f7bd932d52b3b2531ecd4e5324a8f478b7760751bbaa3349e584e"} Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.334102 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c418e171-9f17-48e4-9269-e9b315b2a2dd","Type":"ContainerStarted","Data":"eecd25b5c6222983918d6c166a576ca782cb31bc1e7f7b06d3fb3e0bdf72c3f1"} Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.334117 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c418e171-9f17-48e4-9269-e9b315b2a2dd","Type":"ContainerStarted","Data":"5a56fda07757e6da2902f9c4ef53c75b2798a52653df86a40dd17de1b0d9d945"} Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.342833 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.34281303 podStartE2EDuration="2.34281303s" podCreationTimestamp="2026-01-23 08:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:17.339228474 +0000 UTC m=+1431.819591880" watchObservedRunningTime="2026-01-23 08:19:17.34281303 +0000 UTC m=+1431.823176416" Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.364864 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.364845428 podStartE2EDuration="2.364845428s" podCreationTimestamp="2026-01-23 08:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:17.359406723 +0000 UTC m=+1431.839770109" watchObservedRunningTime="2026-01-23 08:19:17.364845428 +0000 UTC m=+1431.845208814" Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.466169 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.617889 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.670385 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 08:19:17.670680 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c04ec7cf-944b-4279-8426-3485a93d6bae" containerName="kube-state-metrics" containerID="cri-o://92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98" gracePeriod=30 Jan 23 08:19:17 crc kubenswrapper[4982]: I0123 
08:19:17.874964 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f131d5-490a-4d53-8fa4-8c0d8801c098" path="/var/lib/kubelet/pods/69f131d5-490a-4d53-8fa4-8c0d8801c098/volumes" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.207637 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.315352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgbn2\" (UniqueName: \"kubernetes.io/projected/c04ec7cf-944b-4279-8426-3485a93d6bae-kube-api-access-lgbn2\") pod \"c04ec7cf-944b-4279-8426-3485a93d6bae\" (UID: \"c04ec7cf-944b-4279-8426-3485a93d6bae\") " Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.337883 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04ec7cf-944b-4279-8426-3485a93d6bae-kube-api-access-lgbn2" (OuterVolumeSpecName: "kube-api-access-lgbn2") pod "c04ec7cf-944b-4279-8426-3485a93d6bae" (UID: "c04ec7cf-944b-4279-8426-3485a93d6bae"). InnerVolumeSpecName "kube-api-access-lgbn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.355029 4982 generic.go:334] "Generic (PLEG): container finished" podID="c04ec7cf-944b-4279-8426-3485a93d6bae" containerID="92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98" exitCode=2 Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.355111 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c04ec7cf-944b-4279-8426-3485a93d6bae","Type":"ContainerDied","Data":"92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.355140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c04ec7cf-944b-4279-8426-3485a93d6bae","Type":"ContainerDied","Data":"c509fb4ea4d7762b67cba84292de1438d0ad2c18c79e6d11e62f31dafc775c61"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.355162 4982 scope.go:117] "RemoveContainer" containerID="92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.355310 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.374982 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ad26b54-da5c-40d5-a715-b47a4f59b4a7","Type":"ContainerStarted","Data":"4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.375027 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ad26b54-da5c-40d5-a715-b47a4f59b4a7","Type":"ContainerStarted","Data":"84a8c262e8ab1b27aeff793a9e743ce29b057dcc337d54ad305a7c99b3abe23d"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.376259 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.389122 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36962664-0eb8-4871-8cf2-953c55dae470","Type":"ContainerStarted","Data":"4c50a6e667a68747f503983f67064d13af85cb4c61f0165e5086b958cfdd57d7"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.389188 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36962664-0eb8-4871-8cf2-953c55dae470","Type":"ContainerStarted","Data":"f007a7dde8d4eba79e71e02826329583393a94d83a7dcec3481c58d439e297e3"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.389198 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36962664-0eb8-4871-8cf2-953c55dae470","Type":"ContainerStarted","Data":"50476f218d56b7c72024d078ac1318c21cfc454f473a5f57f845be48fe1aac79"} Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.410366 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.410241543 podStartE2EDuration="2.410241543s" podCreationTimestamp="2026-01-23 08:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:18.397131213 +0000 UTC m=+1432.877494599" watchObservedRunningTime="2026-01-23 08:19:18.410241543 +0000 UTC m=+1432.890604929" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.426007 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgbn2\" (UniqueName: \"kubernetes.io/projected/c04ec7cf-944b-4279-8426-3485a93d6bae-kube-api-access-lgbn2\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.439533 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.439493834 podStartE2EDuration="2.439493834s" podCreationTimestamp="2026-01-23 08:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:18.415309608 +0000 UTC m=+1432.895672994" watchObservedRunningTime="2026-01-23 08:19:18.439493834 +0000 UTC m=+1432.919857220" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.463770 4982 scope.go:117] "RemoveContainer" containerID="92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98" Jan 23 08:19:18 crc kubenswrapper[4982]: E0123 08:19:18.464332 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98\": container with ID starting with 92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98 not found: ID does not exist" containerID="92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.464376 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98"} err="failed to get container status \"92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98\": rpc error: code = NotFound desc = could not find container \"92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98\": container with ID starting with 92bad8d6f9c11a6f437a7b6aed7c253e8b0e8149935ffcb8cd0fc165424e5f98 not found: ID does not exist" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.473155 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.489494 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.503801 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:19:18 crc kubenswrapper[4982]: E0123 08:19:18.504429 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04ec7cf-944b-4279-8426-3485a93d6bae" containerName="kube-state-metrics" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.504453 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04ec7cf-944b-4279-8426-3485a93d6bae" containerName="kube-state-metrics" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.504746 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04ec7cf-944b-4279-8426-3485a93d6bae" containerName="kube-state-metrics" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.505637 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.509694 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.510063 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.517069 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.630012 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.630113 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.630279 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.630330 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snkv8\" (UniqueName: \"kubernetes.io/projected/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-api-access-snkv8\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.732614 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.732695 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snkv8\" (UniqueName: \"kubernetes.io/projected/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-api-access-snkv8\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.732799 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.732834 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.738089 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.738115 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.744161 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.755306 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snkv8\" (UniqueName: \"kubernetes.io/projected/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-api-access-snkv8\") pod \"kube-state-metrics-0\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " pod="openstack/kube-state-metrics-0" Jan 23 08:19:18 crc kubenswrapper[4982]: I0123 08:19:18.841398 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.382438 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:19:19 crc kubenswrapper[4982]: W0123 08:19:19.387000 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod938a15bf_06b9_40ed_89ac_ce3da6052837.slice/crio-e25640ae73a06f238acf092a470aaeb3829322c0343810452804dc175418111a WatchSource:0}: Error finding container e25640ae73a06f238acf092a470aaeb3829322c0343810452804dc175418111a: Status 404 returned error can't find the container with id e25640ae73a06f238acf092a470aaeb3829322c0343810452804dc175418111a Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.710550 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.710912 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-central-agent" containerID="cri-o://3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2" gracePeriod=30 Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.711073 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="proxy-httpd" containerID="cri-o://bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba" gracePeriod=30 Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.711114 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="sg-core" containerID="cri-o://00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54" gracePeriod=30 Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.711145 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-notification-agent" containerID="cri-o://4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51" gracePeriod=30 Jan 23 08:19:19 crc kubenswrapper[4982]: I0123 08:19:19.873964 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04ec7cf-944b-4279-8426-3485a93d6bae" path="/var/lib/kubelet/pods/c04ec7cf-944b-4279-8426-3485a93d6bae/volumes" Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.414382 4982 generic.go:334] "Generic (PLEG): container finished" podID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerID="00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54" exitCode=2 Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.414458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerDied","Data":"00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54"} Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.417443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"938a15bf-06b9-40ed-89ac-ce3da6052837","Type":"ContainerStarted","Data":"954f7c6825edc3a6a04648b00c0271b4e702de967be419c5506096f1f886a150"} Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.417509 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"938a15bf-06b9-40ed-89ac-ce3da6052837","Type":"ContainerStarted","Data":"e25640ae73a06f238acf092a470aaeb3829322c0343810452804dc175418111a"} Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.417574 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.444070 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.094950951 podStartE2EDuration="2.444053943s" podCreationTimestamp="2026-01-23 08:19:18 +0000 UTC" firstStartedPulling="2026-01-23 08:19:19.389641177 +0000 UTC m=+1433.870004563" lastFinishedPulling="2026-01-23 08:19:19.738744169 +0000 UTC m=+1434.219107555" observedRunningTime="2026-01-23 08:19:20.441961197 +0000 UTC m=+1434.922324593" watchObservedRunningTime="2026-01-23 08:19:20.444053943 +0000 UTC m=+1434.924417329" Jan 23 08:19:20 crc kubenswrapper[4982]: I0123 08:19:20.771793 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.097365 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.097948 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.438603 4982 generic.go:334] "Generic (PLEG): container finished" podID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerID="bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba" exitCode=0 Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.438655 4982 generic.go:334] "Generic (PLEG): container finished" podID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerID="3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2" exitCode=0 Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.438666 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerDied","Data":"bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba"} Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.438723 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerDied","Data":"3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2"} Jan 23 08:19:21 crc kubenswrapper[4982]: I0123 08:19:21.967164 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.127353 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-scripts\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.127527 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9mjs\" (UniqueName: \"kubernetes.io/projected/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-kube-api-access-j9mjs\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.127565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-sg-core-conf-yaml\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.127652 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-run-httpd\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.127841 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-log-httpd\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.128110 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.128270 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.127876 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-combined-ca-bundle\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.128339 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-config-data\") pod \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\" (UID: \"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe\") " Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.130010 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.130045 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.135765 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-kube-api-access-j9mjs" (OuterVolumeSpecName: "kube-api-access-j9mjs") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "kube-api-access-j9mjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.137240 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-scripts" (OuterVolumeSpecName: "scripts") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.172164 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.227944 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.232597 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.232651 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.232666 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9mjs\" (UniqueName: \"kubernetes.io/projected/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-kube-api-access-j9mjs\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.232680 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.289267 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-config-data" (OuterVolumeSpecName: "config-data") pod "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" (UID: "d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.334104 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.451542 4982 generic.go:334] "Generic (PLEG): container finished" podID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerID="4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51" exitCode=0 Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.451601 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerDied","Data":"4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51"} Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.451655 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe","Type":"ContainerDied","Data":"08ce4c038d413735c361eacb56dc4cbdcb5532781eed332406512cabc8b5913c"} Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.451654 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.451733 4982 scope.go:117] "RemoveContainer" containerID="bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.495779 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.499224 4982 scope.go:117] "RemoveContainer" containerID="00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.521252 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.547649 4982 scope.go:117] "RemoveContainer" containerID="4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.551516 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.552127 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-central-agent" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552151 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-central-agent" Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.552183 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="proxy-httpd" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552192 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="proxy-httpd" Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.552221 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="sg-core" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552229 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="sg-core" Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.552246 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-notification-agent" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552255 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-notification-agent" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552502 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-central-agent" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552525 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="proxy-httpd" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552538 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="sg-core" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.552547 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" containerName="ceilometer-notification-agent" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.555038 4982 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.559117 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.559395 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.559682 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.581911 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.629846 4982 scope.go:117] "RemoveContainer" containerID="3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.639596 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-scripts\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.639724 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.639780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-log-httpd\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.639808 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.639920 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-run-httpd\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.640197 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdd8r\" (UniqueName: \"kubernetes.io/projected/103f161a-96db-453b-9f48-ad609df3d25e-kube-api-access-gdd8r\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.640324 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-config-data\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0" Jan 23 08:19:22 crc 
kubenswrapper[4982]: I0123 08:19:22.640372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.666439 4982 scope.go:117] "RemoveContainer" containerID="bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba"
Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.667142 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba\": container with ID starting with bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba not found: ID does not exist" containerID="bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.667208 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba"} err="failed to get container status \"bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba\": rpc error: code = NotFound desc = could not find container \"bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba\": container with ID starting with bbf870370318f74343a073085d514994ea84104566d73cc747b8ede1efc505ba not found: ID does not exist"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.667264 4982 scope.go:117] "RemoveContainer" containerID="00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54"
Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.667820 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54\": container with ID starting with 00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54 not found: ID does not exist" containerID="00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.667848 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54"} err="failed to get container status \"00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54\": rpc error: code = NotFound desc = could not find container \"00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54\": container with ID starting with 00ee6828d7dbc9ec1755896173bcd11ec241d0b21697b20890d470cec9d4dd54 not found: ID does not exist"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.667865 4982 scope.go:117] "RemoveContainer" containerID="4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51"
Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.668477 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51\": container with ID starting with 4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51 not found: ID does not exist" containerID="4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.668684 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51"} err="failed to get container status \"4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51\": rpc error: code = NotFound desc = could not find container \"4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51\": container with ID starting with 4cf035a7d995e340dd5b25580c4090a785074d25f52b54dcc4bec4c438c39e51 not found: ID does not exist"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.668794 4982 scope.go:117] "RemoveContainer" containerID="3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2"
Jan 23 08:19:22 crc kubenswrapper[4982]: E0123 08:19:22.669796 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2\": container with ID starting with 3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2 not found: ID does not exist" containerID="3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.669833 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2"} err="failed to get container status \"3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2\": rpc error: code = NotFound desc = could not find container \"3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2\": container with ID starting with 3712e55be20369626d96cbaa8eedac936c4bf0b19cde3b2be1ca6dd5d46845e2 not found: ID does not exist"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.742260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-scripts\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.742641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.742756 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-log-httpd\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.742832 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.742936 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-run-httpd\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.743025 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdd8r\" (UniqueName: \"kubernetes.io/projected/103f161a-96db-453b-9f48-ad609df3d25e-kube-api-access-gdd8r\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.743113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-config-data\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.743226 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.746009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-run-httpd\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.751173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.751570 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-log-httpd\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.751755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.754061 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.754819 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-scripts\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.756893 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-config-data\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.763203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdd8r\" (UniqueName: \"kubernetes.io/projected/103f161a-96db-453b-9f48-ad609df3d25e-kube-api-access-gdd8r\") pod \"ceilometer-0\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") " pod="openstack/ceilometer-0"
Jan 23 08:19:22 crc kubenswrapper[4982]: I0123 08:19:22.906200 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 08:19:23 crc kubenswrapper[4982]: I0123 08:19:23.605119 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:19:23 crc kubenswrapper[4982]: I0123 08:19:23.849075 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe" path="/var/lib/kubelet/pods/d9fb77f9-b47a-46a4-8f3e-ec24ff0012fe/volumes"
Jan 23 08:19:24 crc kubenswrapper[4982]: I0123 08:19:24.477142 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerStarted","Data":"7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a"}
Jan 23 08:19:24 crc kubenswrapper[4982]: I0123 08:19:24.477550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerStarted","Data":"22dbc66e9bcb3de1dec68e4e5eef53b0cd61b42dd4e3bd3bb31f84642ab337d9"}
Jan 23 08:19:25 crc kubenswrapper[4982]: I0123 08:19:25.772967 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 23 08:19:25 crc kubenswrapper[4982]: I0123 08:19:25.806081 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 23 08:19:26 crc kubenswrapper[4982]: I0123 08:19:26.098532 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 23 08:19:26 crc kubenswrapper[4982]: I0123 08:19:26.099124 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 23 08:19:26 crc kubenswrapper[4982]: I0123 08:19:26.508382 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerStarted","Data":"d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627"}
Jan 23 08:19:26 crc kubenswrapper[4982]: I0123 08:19:26.554248 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 23 08:19:27 crc kubenswrapper[4982]: I0123 08:19:27.020736 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 23 08:19:27 crc kubenswrapper[4982]: I0123 08:19:27.057823 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 08:19:27 crc kubenswrapper[4982]: I0123 08:19:27.058370 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 08:19:27 crc kubenswrapper[4982]: I0123 08:19:27.109898 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 08:19:27 crc kubenswrapper[4982]: I0123 08:19:27.109898 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 08:19:27 crc kubenswrapper[4982]: I0123 08:19:27.526991 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerStarted","Data":"2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3"}
Jan 23 08:19:28 crc kubenswrapper[4982]: I0123 08:19:28.139819 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 08:19:28 crc kubenswrapper[4982]: I0123 08:19:28.140207 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 08:19:28 crc kubenswrapper[4982]: I0123 08:19:28.864997 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 23 08:19:29 crc kubenswrapper[4982]: I0123 08:19:29.562179 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerStarted","Data":"31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2"}
Jan 23 08:19:29 crc kubenswrapper[4982]: I0123 08:19:29.563515 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 23 08:19:29 crc kubenswrapper[4982]: I0123 08:19:29.589665 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.67305325 podStartE2EDuration="7.58964058s" podCreationTimestamp="2026-01-23 08:19:22 +0000 UTC" firstStartedPulling="2026-01-23 08:19:23.589804535 +0000 UTC m=+1438.070167921" lastFinishedPulling="2026-01-23 08:19:28.506391865 +0000 UTC m=+1442.986755251" observedRunningTime="2026-01-23 08:19:29.587078342 +0000 UTC m=+1444.067441728" watchObservedRunningTime="2026-01-23 08:19:29.58964058 +0000 UTC m=+1444.070003986"
Jan 23 08:19:36 crc kubenswrapper[4982]: I0123 08:19:36.107061 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 23 08:19:36 crc kubenswrapper[4982]: I0123 08:19:36.108525 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 23 08:19:36 crc kubenswrapper[4982]: I0123 08:19:36.117940 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 08:19:36 crc kubenswrapper[4982]: I0123 08:19:36.666110 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.063321 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.064026 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.065440 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.065474 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.069715 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.070957 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.340203 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"]
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.343562 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.430377 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"]
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.444110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-config\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.444154 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.444174 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.444271 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtqz\" (UniqueName: \"kubernetes.io/projected/a358b847-122d-4a65-99b9-e21d6a8136aa-kube-api-access-vgtqz\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.444315 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.444349 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.546080 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtqz\" (UniqueName: \"kubernetes.io/projected/a358b847-122d-4a65-99b9-e21d6a8136aa-kube-api-access-vgtqz\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.546512 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.546574 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.546777 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-config\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.546814 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.546865 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.547973 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.548685 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-config\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.548965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.549153 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.550249 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.567483 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtqz\" (UniqueName: \"kubernetes.io/projected/a358b847-122d-4a65-99b9-e21d6a8136aa-kube-api-access-vgtqz\") pod \"dnsmasq-dns-fcd6f8f8f-7xfzb\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.715059 4982 generic.go:334] "Generic (PLEG): container finished" podID="31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" containerID="9f44f2214b8541e68400460c130ff8534316139d205c85096653b9d99f9d7ae2" exitCode=137
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.715702 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0","Type":"ContainerDied","Data":"9f44f2214b8541e68400460c130ff8534316139d205c85096653b9d99f9d7ae2"}
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.718687 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:37 crc kubenswrapper[4982]: I0123 08:19:37.919496 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.058764 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-combined-ca-bundle\") pod \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") "
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.058826 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-config-data\") pod \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") "
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.058873 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcvbn\" (UniqueName: \"kubernetes.io/projected/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-kube-api-access-wcvbn\") pod \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\" (UID: \"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0\") "
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.070804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-kube-api-access-wcvbn" (OuterVolumeSpecName: "kube-api-access-wcvbn") pod "31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" (UID: "31fea2ab-c75c-4825-bb9e-7d3bcdd551e0"). InnerVolumeSpecName "kube-api-access-wcvbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.109103 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-config-data" (OuterVolumeSpecName: "config-data") pod "31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" (UID: "31fea2ab-c75c-4825-bb9e-7d3bcdd551e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.156784 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" (UID: "31fea2ab-c75c-4825-bb9e-7d3bcdd551e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.162119 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.162163 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.162176 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcvbn\" (UniqueName: \"kubernetes.io/projected/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0-kube-api-access-wcvbn\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.412848 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"]
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.735115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" event={"ID":"a358b847-122d-4a65-99b9-e21d6a8136aa","Type":"ContainerStarted","Data":"ae74c82bfe1f5971bbb740ea6c3b7cdbd538bafd66f324db244e06a5fc7cfbbc"}
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.739333 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"31fea2ab-c75c-4825-bb9e-7d3bcdd551e0","Type":"ContainerDied","Data":"c0848f1dc9560bea99aaa4def52c87df28e5380f56dc513b11b65a9c0db8019f"}
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.739399 4982 scope.go:117] "RemoveContainer" containerID="9f44f2214b8541e68400460c130ff8534316139d205c85096653b9d99f9d7ae2"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.739695 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.799894 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.821647 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.837552 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 08:19:38 crc kubenswrapper[4982]: E0123 08:19:38.838477 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.838508 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.838834 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" containerName="nova-cell1-novncproxy-novncproxy"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.840256 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.856672 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.858262 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.858557 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.858813 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.990991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.991105 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgx8m\" (UniqueName: \"kubernetes.io/projected/996235df-739a-4029-96df-88e45a44a38c-kube-api-access-cgx8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.991155 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.991232 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:38 crc kubenswrapper[4982]: I0123 08:19:38.991870 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.094661 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.094806 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.094868 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.094939 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgx8m\" (UniqueName: \"kubernetes.io/projected/996235df-739a-4029-96df-88e45a44a38c-kube-api-access-cgx8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.095025 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.100220 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.100358 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.101254 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.101753 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.120648 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgx8m\" (UniqueName: \"kubernetes.io/projected/996235df-739a-4029-96df-88e45a44a38c-kube-api-access-cgx8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.171652 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.760262 4982 generic.go:334] "Generic (PLEG): container finished" podID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerID="dd88ed37f3a343261a9c049903d9492c70740351a511140530ad3a09ecab4c79" exitCode=0
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.761090 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" event={"ID":"a358b847-122d-4a65-99b9-e21d6a8136aa","Type":"ContainerDied","Data":"dd88ed37f3a343261a9c049903d9492c70740351a511140530ad3a09ecab4c79"}
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.761160 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 23 08:19:39 crc kubenswrapper[4982]: I0123 08:19:39.844776 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fea2ab-c75c-4825-bb9e-7d3bcdd551e0" path="/var/lib/kubelet/pods/31fea2ab-c75c-4825-bb9e-7d3bcdd551e0/volumes"
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.153922 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.154585 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-central-agent" containerID="cri-o://7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a" gracePeriod=30
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.155024 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="proxy-httpd" containerID="cri-o://31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2" gracePeriod=30
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.155644 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-notification-agent" containerID="cri-o://d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627" gracePeriod=30
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.155705 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="sg-core" containerID="cri-o://2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3" gracePeriod=30
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.170252 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.782442 4982 generic.go:334] "Generic (PLEG): container finished" podID="103f161a-96db-453b-9f48-ad609df3d25e" containerID="31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2" exitCode=0
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.782487 4982 generic.go:334] "Generic (PLEG): container finished" podID="103f161a-96db-453b-9f48-ad609df3d25e" containerID="2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3" exitCode=2
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.782496 4982 generic.go:334] "Generic (PLEG): container finished" podID="103f161a-96db-453b-9f48-ad609df3d25e" containerID="7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a" exitCode=0
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.782560 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerDied","Data":"31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2"}
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.782587 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerDied","Data":"2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3"}
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.782599 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerDied","Data":"7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a"}
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.786581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" event={"ID":"a358b847-122d-4a65-99b9-e21d6a8136aa","Type":"ContainerStarted","Data":"39f9cc3dea990a44f6526e8d44212606fd2bd98975af302178a764d6956a1b61"}
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.786757 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.788870 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"996235df-739a-4029-96df-88e45a44a38c","Type":"ContainerStarted","Data":"0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5"}
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.788903 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"996235df-739a-4029-96df-88e45a44a38c","Type":"ContainerStarted","Data":"0bca2e40080cbc834225bad3b25f0376de390c68077a740300879fdde557bb4a"}
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.815105 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" podStartSLOduration=3.815086588 podStartE2EDuration="3.815086588s" podCreationTimestamp="2026-01-23 08:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:40.809898979 +0000 UTC m=+1455.290262375" watchObservedRunningTime="2026-01-23 08:19:40.815086588 +0000 UTC m=+1455.295449974"
Jan 23 08:19:40 crc kubenswrapper[4982]: I0123 08:19:40.837742 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.837722282 podStartE2EDuration="2.837722282s" podCreationTimestamp="2026-01-23 08:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:40.829706508 +0000 UTC m=+1455.310069894" watchObservedRunningTime="2026-01-23 08:19:40.837722282 +0000 UTC m=+1455.318085668"
Jan 23 08:19:41 crc kubenswrapper[4982]: I0123 08:19:41.426714 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 08:19:41 crc kubenswrapper[4982]: I0123 08:19:41.427652 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-log" containerID="cri-o://f007a7dde8d4eba79e71e02826329583393a94d83a7dcec3481c58d439e297e3" gracePeriod=30
Jan 23 08:19:41 crc kubenswrapper[4982]: I0123 08:19:41.427741 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-api" containerID="cri-o://4c50a6e667a68747f503983f67064d13af85cb4c61f0165e5086b958cfdd57d7" gracePeriod=30
Jan 23 08:19:41 crc kubenswrapper[4982]: I0123 08:19:41.803764 4982 generic.go:334] "Generic (PLEG): container finished" podID="36962664-0eb8-4871-8cf2-953c55dae470" containerID="f007a7dde8d4eba79e71e02826329583393a94d83a7dcec3481c58d439e297e3" exitCode=143
Jan 23 08:19:41 crc kubenswrapper[4982]: I0123 08:19:41.803891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36962664-0eb8-4871-8cf2-953c55dae470","Type":"ContainerDied","Data":"f007a7dde8d4eba79e71e02826329583393a94d83a7dcec3481c58d439e297e3"}
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.173715 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.730687 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.848762 4982 generic.go:334] "Generic (PLEG): container finished" podID="103f161a-96db-453b-9f48-ad609df3d25e" containerID="d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627" exitCode=0
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.848842 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerDied","Data":"d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627"}
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.848883 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"103f161a-96db-453b-9f48-ad609df3d25e","Type":"ContainerDied","Data":"22dbc66e9bcb3de1dec68e4e5eef53b0cd61b42dd4e3bd3bb31f84642ab337d9"}
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.848905 4982 scope.go:117] "RemoveContainer" containerID="31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2"
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.849078 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.863493 4982 generic.go:334] "Generic (PLEG): container finished" podID="36962664-0eb8-4871-8cf2-953c55dae470" containerID="4c50a6e667a68747f503983f67064d13af85cb4c61f0165e5086b958cfdd57d7" exitCode=0
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.863534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36962664-0eb8-4871-8cf2-953c55dae470","Type":"ContainerDied","Data":"4c50a6e667a68747f503983f67064d13af85cb4c61f0165e5086b958cfdd57d7"}
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.864698 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-scripts\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.864825 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-run-httpd\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.864896 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-sg-core-conf-yaml\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.864936 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-ceilometer-tls-certs\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.865244 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdd8r\" (UniqueName: \"kubernetes.io/projected/103f161a-96db-453b-9f48-ad609df3d25e-kube-api-access-gdd8r\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.865403 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-combined-ca-bundle\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.865830 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.866041 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-log-httpd\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.866075 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-config-data\") pod \"103f161a-96db-453b-9f48-ad609df3d25e\" (UID: \"103f161a-96db-453b-9f48-ad609df3d25e\") "
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.867953 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.888821 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-scripts" (OuterVolumeSpecName: "scripts") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.898350 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103f161a-96db-453b-9f48-ad609df3d25e-kube-api-access-gdd8r" (OuterVolumeSpecName: "kube-api-access-gdd8r") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "kube-api-access-gdd8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.972793 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.972829 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.972844 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdd8r\" (UniqueName: \"kubernetes.io/projected/103f161a-96db-453b-9f48-ad609df3d25e-kube-api-access-gdd8r\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:44 crc kubenswrapper[4982]: I0123 08:19:44.972857 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/103f161a-96db-453b-9f48-ad609df3d25e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.064410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.084121 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.132979 4982 scope.go:117] "RemoveContainer" containerID="2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.139296 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.175334 4982 scope.go:117] "RemoveContainer" containerID="d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.183266 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.187486 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.195888 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.218043 4982 scope.go:117] "RemoveContainer" containerID="7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.244582 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-config-data" (OuterVolumeSpecName: "config-data") pod "103f161a-96db-453b-9f48-ad609df3d25e" (UID: "103f161a-96db-453b-9f48-ad609df3d25e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.263260 4982 scope.go:117] "RemoveContainer" containerID="31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2"
Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.264099 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2\": container with ID starting with 31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2 not found: ID does not exist" containerID="31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.264152 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2"} err="failed to get container status \"31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2\": rpc error: code = NotFound desc = could not find container \"31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2\": container with ID starting with 31757e1ef995ed7c686f24363bb3ee3786b22d0030d92a295a63cbf01dbde4f2 not found: ID does not exist"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.264186 4982 scope.go:117] "RemoveContainer" containerID="2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3"
Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.264867 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3\": container with ID starting with 2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3 not found: ID does not exist" containerID="2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.264899 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3"} err="failed to get container status \"2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3\": rpc error: code = NotFound desc = could not find container \"2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3\": container with ID starting with 2ed5643b47f7f1291e1711a9f70c8efe530985c9695ba3320ae8ac3a245ab7c3 not found: ID does not exist"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.264922 4982 scope.go:117] "RemoveContainer" containerID="d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627"
Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.265479 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627\": container with ID starting with d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627 not found: ID does not exist" containerID="d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.265507 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627"} err="failed to get container status \"d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627\": rpc error: code = NotFound desc = could not find container \"d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627\": container with ID starting with d2220e8222eb7bdec5eb27a25396ee9177032ef5a918d5c2b17e2373607a4627 not found: ID does not exist"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.265521 4982 scope.go:117] "RemoveContainer" containerID="7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a"
Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.266075 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a\": container with ID starting with 7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a not found: ID does not exist" containerID="7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.266096 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a"} err="failed to get container status \"7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a\": rpc error: code = NotFound desc = could not find container \"7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a\": container with ID starting with 7e7ec66791745e6552e1a36858036d242f5af08aadbe809ed5a46e4ea403be5a not found: ID does not exist"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.290395 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-config-data\") pod \"36962664-0eb8-4871-8cf2-953c55dae470\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") "
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.290493 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36962664-0eb8-4871-8cf2-953c55dae470-logs\") pod \"36962664-0eb8-4871-8cf2-953c55dae470\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") "
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.290564 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-combined-ca-bundle\") pod \"36962664-0eb8-4871-8cf2-953c55dae470\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") "
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.290654 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnkxg\" (UniqueName: \"kubernetes.io/projected/36962664-0eb8-4871-8cf2-953c55dae470-kube-api-access-nnkxg\") pod \"36962664-0eb8-4871-8cf2-953c55dae470\" (UID: \"36962664-0eb8-4871-8cf2-953c55dae470\") "
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.291514 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.291542 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103f161a-96db-453b-9f48-ad609df3d25e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.292804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36962664-0eb8-4871-8cf2-953c55dae470-logs" (OuterVolumeSpecName: "logs") pod "36962664-0eb8-4871-8cf2-953c55dae470" (UID: "36962664-0eb8-4871-8cf2-953c55dae470"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.298659 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36962664-0eb8-4871-8cf2-953c55dae470-kube-api-access-nnkxg" (OuterVolumeSpecName: "kube-api-access-nnkxg") pod "36962664-0eb8-4871-8cf2-953c55dae470" (UID: "36962664-0eb8-4871-8cf2-953c55dae470"). InnerVolumeSpecName "kube-api-access-nnkxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.339766 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-config-data" (OuterVolumeSpecName: "config-data") pod "36962664-0eb8-4871-8cf2-953c55dae470" (UID: "36962664-0eb8-4871-8cf2-953c55dae470"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.360516 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36962664-0eb8-4871-8cf2-953c55dae470" (UID: "36962664-0eb8-4871-8cf2-953c55dae470"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.393272 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.393415 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36962664-0eb8-4871-8cf2-953c55dae470-logs\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.393483 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36962664-0eb8-4871-8cf2-953c55dae470-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.393544 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnkxg\" (UniqueName: \"kubernetes.io/projected/36962664-0eb8-4871-8cf2-953c55dae470-kube-api-access-nnkxg\") on node \"crc\" DevicePath \"\""
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.510811 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.536024 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.554189 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.556827 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-notification-agent"
Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.557001 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-notification-agent"
Jan 23 08:19:45 
crc kubenswrapper[4982]: E0123 08:19:45.557125 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="sg-core" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.557201 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="sg-core" Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.557306 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-api" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.557379 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-api" Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.557480 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="proxy-httpd" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.557553 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="proxy-httpd" Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.557674 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-log" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.557762 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-log" Jan 23 08:19:45 crc kubenswrapper[4982]: E0123 08:19:45.557894 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-central-agent" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.557975 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-central-agent" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.559232 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-api" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.559367 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="proxy-httpd" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.559462 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="sg-core" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.559573 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-notification-agent" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.559749 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="36962664-0eb8-4871-8cf2-953c55dae470" containerName="nova-api-log" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.559845 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="103f161a-96db-453b-9f48-ad609df3d25e" containerName="ceilometer-central-agent" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.569662 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.574733 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.575299 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.582461 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.584102 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702071 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5l5\" (UniqueName: \"kubernetes.io/projected/16e7b450-ab50-4a27-ae68-450c34eb2cba-kube-api-access-2v5l5\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702380 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702500 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-run-httpd\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702591 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-scripts\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702740 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-log-httpd\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702849 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.702945 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.703038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-config-data\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804477 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804587 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-config-data\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804629 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5l5\" (UniqueName: \"kubernetes.io/projected/16e7b450-ab50-4a27-ae68-450c34eb2cba-kube-api-access-2v5l5\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804689 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804716 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-run-httpd\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804741 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-scripts\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.804797 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-log-httpd\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.805241 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-run-httpd\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.805276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-log-httpd\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.809224 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.810430 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.810922 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-config-data\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.812751 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-scripts\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.814704 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.823874 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5l5\" (UniqueName: \"kubernetes.io/projected/16e7b450-ab50-4a27-ae68-450c34eb2cba-kube-api-access-2v5l5\") pod \"ceilometer-0\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " pod="openstack/ceilometer-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.840258 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103f161a-96db-453b-9f48-ad609df3d25e" path="/var/lib/kubelet/pods/103f161a-96db-453b-9f48-ad609df3d25e/volumes" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.877438 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"36962664-0eb8-4871-8cf2-953c55dae470","Type":"ContainerDied","Data":"50476f218d56b7c72024d078ac1318c21cfc454f473a5f57f845be48fe1aac79"} Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.877496 4982 scope.go:117] "RemoveContainer" containerID="4c50a6e667a68747f503983f67064d13af85cb4c61f0165e5086b958cfdd57d7" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.877737 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:45 crc kubenswrapper[4982]: I0123 08:19:45.900230 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:45.983331 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.029218 4982 scope.go:117] "RemoveContainer" containerID="f007a7dde8d4eba79e71e02826329583393a94d83a7dcec3481c58d439e297e3" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.046000 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.055223 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.059520 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.062678 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.062887 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.063003 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.102898 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.214254 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q926t\" (UniqueName: \"kubernetes.io/projected/6107b5ed-2323-4fdf-92fd-8a45f5775c78-kube-api-access-q926t\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.214310 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.214546 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107b5ed-2323-4fdf-92fd-8a45f5775c78-logs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.214575 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-public-tls-certs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.214665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.214700 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-config-data\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.316324 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q926t\" (UniqueName: \"kubernetes.io/projected/6107b5ed-2323-4fdf-92fd-8a45f5775c78-kube-api-access-q926t\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.316378 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.316515 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107b5ed-2323-4fdf-92fd-8a45f5775c78-logs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.316537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-public-tls-certs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.316584 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.316642 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-config-data\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.317174 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107b5ed-2323-4fdf-92fd-8a45f5775c78-logs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.323127 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.323813 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.325204 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-public-tls-certs\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.325617 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-config-data\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.339294 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q926t\" (UniqueName: \"kubernetes.io/projected/6107b5ed-2323-4fdf-92fd-8a45f5775c78-kube-api-access-q926t\") pod \"nova-api-0\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.416879 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.889466 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:19:46 crc kubenswrapper[4982]: W0123 08:19:46.896485 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16e7b450_ab50_4a27_ae68_450c34eb2cba.slice/crio-b02308e3b9f16694380f41f5b18021820fe2f4b45566f747b7cd766b3fc9afa5 WatchSource:0}: Error finding container b02308e3b9f16694380f41f5b18021820fe2f4b45566f747b7cd766b3fc9afa5: Status 404 returned error can't find the container with id b02308e3b9f16694380f41f5b18021820fe2f4b45566f747b7cd766b3fc9afa5 Jan 23 08:19:46 crc kubenswrapper[4982]: I0123 08:19:46.906711 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.720526 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.816844 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-nhbw6"] Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.817250 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerName="dnsmasq-dns" containerID="cri-o://c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d" gracePeriod=10 Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.936236 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36962664-0eb8-4871-8cf2-953c55dae470" path="/var/lib/kubelet/pods/36962664-0eb8-4871-8cf2-953c55dae470/volumes" Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.957498 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerStarted","Data":"60990b5983a5ed6f357cfa2bc9fee6928a439ca6581002d82be773bb5dcd8535"} Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.957551 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerStarted","Data":"b02308e3b9f16694380f41f5b18021820fe2f4b45566f747b7cd766b3fc9afa5"} Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.960014 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"6107b5ed-2323-4fdf-92fd-8a45f5775c78","Type":"ContainerStarted","Data":"e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db"} Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.960046 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6107b5ed-2323-4fdf-92fd-8a45f5775c78","Type":"ContainerStarted","Data":"14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab"} Jan 23 08:19:47 crc kubenswrapper[4982]: I0123 08:19:47.960057 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6107b5ed-2323-4fdf-92fd-8a45f5775c78","Type":"ContainerStarted","Data":"131c3fbabd23fcd8c18a8fa443ec1b796d3599a7d311e45a543c968fd6db3940"} Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.055430 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.055409408 podStartE2EDuration="3.055409408s" podCreationTimestamp="2026-01-23 08:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:48.016956912 +0000 UTC m=+1462.497320298" watchObservedRunningTime="2026-01-23 08:19:48.055409408 +0000 UTC m=+1462.535772784" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.596673 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.665137 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-svc\") pod \"54f8e0fa-7196-4b84-abec-cdea9bc79140\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.665181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-config\") pod \"54f8e0fa-7196-4b84-abec-cdea9bc79140\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.665239 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-sb\") pod \"54f8e0fa-7196-4b84-abec-cdea9bc79140\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.665353 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drfn7\" (UniqueName: \"kubernetes.io/projected/54f8e0fa-7196-4b84-abec-cdea9bc79140-kube-api-access-drfn7\") pod \"54f8e0fa-7196-4b84-abec-cdea9bc79140\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.665400 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-swift-storage-0\") pod \"54f8e0fa-7196-4b84-abec-cdea9bc79140\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.665444 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-nb\") pod 
\"54f8e0fa-7196-4b84-abec-cdea9bc79140\" (UID: \"54f8e0fa-7196-4b84-abec-cdea9bc79140\") " Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.672224 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f8e0fa-7196-4b84-abec-cdea9bc79140-kube-api-access-drfn7" (OuterVolumeSpecName: "kube-api-access-drfn7") pod "54f8e0fa-7196-4b84-abec-cdea9bc79140" (UID: "54f8e0fa-7196-4b84-abec-cdea9bc79140"). InnerVolumeSpecName "kube-api-access-drfn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.732471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "54f8e0fa-7196-4b84-abec-cdea9bc79140" (UID: "54f8e0fa-7196-4b84-abec-cdea9bc79140"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.743279 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "54f8e0fa-7196-4b84-abec-cdea9bc79140" (UID: "54f8e0fa-7196-4b84-abec-cdea9bc79140"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.744091 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "54f8e0fa-7196-4b84-abec-cdea9bc79140" (UID: "54f8e0fa-7196-4b84-abec-cdea9bc79140"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.745083 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "54f8e0fa-7196-4b84-abec-cdea9bc79140" (UID: "54f8e0fa-7196-4b84-abec-cdea9bc79140"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.747234 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-config" (OuterVolumeSpecName: "config") pod "54f8e0fa-7196-4b84-abec-cdea9bc79140" (UID: "54f8e0fa-7196-4b84-abec-cdea9bc79140"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.768458 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.768497 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.768507 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.768520 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drfn7\" (UniqueName: \"kubernetes.io/projected/54f8e0fa-7196-4b84-abec-cdea9bc79140-kube-api-access-drfn7\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.768531 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.768539 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/54f8e0fa-7196-4b84-abec-cdea9bc79140-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.991648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerStarted","Data":"6c840aeda273bccf85f7f5295d0123e2191ab16e2c3ce7f4c6ffeaffac682224"} Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.994575 4982 generic.go:334] "Generic (PLEG): container finished" podID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerID="c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d" exitCode=0 Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.994844 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.995600 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" event={"ID":"54f8e0fa-7196-4b84-abec-cdea9bc79140","Type":"ContainerDied","Data":"c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d"} Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.995654 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-nhbw6" event={"ID":"54f8e0fa-7196-4b84-abec-cdea9bc79140","Type":"ContainerDied","Data":"4534e07402348eb76b623ba47f0fab7725784f5f2154fafaa0e827ad349d9c84"} Jan 23 08:19:48 crc kubenswrapper[4982]: I0123 08:19:48.995675 4982 scope.go:117] "RemoveContainer" containerID="c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.022157 4982 scope.go:117] "RemoveContainer" containerID="0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.053882 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-nhbw6"] Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.064938 4982 scope.go:117] "RemoveContainer" containerID="c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d" Jan 23 08:19:49 crc kubenswrapper[4982]: E0123 08:19:49.065790 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d\": container with ID starting with c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d not found: ID does not exist" containerID="c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.066006 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d"} err="failed to get container status \"c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d\": rpc error: code = NotFound desc = could not find container \"c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d\": container with ID starting with c83c752af9b556a8d599e6af0c811d34da9a43a63b2e58024c7a6ebe69afed2d not found: ID does not exist" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.066123 4982 scope.go:117] "RemoveContainer" containerID="0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d" Jan 23 08:19:49 crc kubenswrapper[4982]: E0123 08:19:49.066595 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d\": container with ID starting with 0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d not found: ID does not exist" containerID="0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.066642 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d"} err="failed to get container status \"0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d\": rpc error: code = NotFound desc = could not find container 
\"0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d\": container with ID starting with 0edeefb21af7234bb43979a3251d4dd412ac4584c66d26a7c765f4c7469e778d not found: ID does not exist" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.071848 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-nhbw6"] Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.173938 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.199345 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:19:49 crc kubenswrapper[4982]: I0123 08:19:49.846458 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" path="/var/lib/kubelet/pods/54f8e0fa-7196-4b84-abec-cdea9bc79140/volumes" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.011936 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerStarted","Data":"bc50b6a7dac2bafbc66b790973c209bbad338f83281ec4c65bb5105ffb7b24d5"} Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.028592 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.215880 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-zmlhm"] Jan 23 08:19:50 crc kubenswrapper[4982]: E0123 08:19:50.216440 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerName="dnsmasq-dns" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.216461 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerName="dnsmasq-dns" Jan 23 08:19:50 crc kubenswrapper[4982]: E0123 08:19:50.216493 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerName="init" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.216500 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerName="init" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.216737 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f8e0fa-7196-4b84-abec-cdea9bc79140" containerName="dnsmasq-dns" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.217452 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.220524 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.221812 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.227354 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zmlhm"] Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.310529 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.310607 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjd6z\" (UniqueName: \"kubernetes.io/projected/967baf94-5c3d-4033-9156-ed4b453588fe-kube-api-access-bjd6z\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.310665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-scripts\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.310813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-config-data\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.413177 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.413673 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjd6z\" (UniqueName: \"kubernetes.io/projected/967baf94-5c3d-4033-9156-ed4b453588fe-kube-api-access-bjd6z\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.413718 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-scripts\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.413814 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-config-data\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.421542 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-scripts\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.423363 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-config-data\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.443996 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.468347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjd6z\" (UniqueName: \"kubernetes.io/projected/967baf94-5c3d-4033-9156-ed4b453588fe-kube-api-access-bjd6z\") pod \"nova-cell1-cell-mapping-zmlhm\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:50 crc kubenswrapper[4982]: I0123 08:19:50.544127 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:51 crc kubenswrapper[4982]: W0123 08:19:51.104527 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod967baf94_5c3d_4033_9156_ed4b453588fe.slice/crio-1de26b56309e5c756e64fee9e90578dac4eb1ee42e4be146b2010c49a21a357b WatchSource:0}: Error finding container 1de26b56309e5c756e64fee9e90578dac4eb1ee42e4be146b2010c49a21a357b: Status 404 returned error can't find the container with id 1de26b56309e5c756e64fee9e90578dac4eb1ee42e4be146b2010c49a21a357b Jan 23 08:19:51 crc kubenswrapper[4982]: I0123 08:19:51.109930 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-zmlhm"] Jan 23 08:19:52 crc kubenswrapper[4982]: I0123 08:19:52.039674 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerStarted","Data":"bc2badabfe3be71114926cfc4d259bac2c6e7c4fee0fe0967b92bbf9f88f6e08"} Jan 23 08:19:52 crc kubenswrapper[4982]: I0123 08:19:52.040133 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 08:19:52 crc kubenswrapper[4982]: I0123 08:19:52.042036 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zmlhm" event={"ID":"967baf94-5c3d-4033-9156-ed4b453588fe","Type":"ContainerStarted","Data":"929fcb8adb709d932397978e38905f34dd9ae86e421d1c518211189261ab24d5"} Jan 23 08:19:52 crc kubenswrapper[4982]: I0123 08:19:52.042078 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zmlhm" event={"ID":"967baf94-5c3d-4033-9156-ed4b453588fe","Type":"ContainerStarted","Data":"1de26b56309e5c756e64fee9e90578dac4eb1ee42e4be146b2010c49a21a357b"} Jan 23 08:19:52 crc kubenswrapper[4982]: I0123 08:19:52.074717 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.172278279 podStartE2EDuration="7.074602754s" podCreationTimestamp="2026-01-23 08:19:45 +0000 UTC" firstStartedPulling="2026-01-23 08:19:46.903018345 +0000 UTC m=+1461.383381741" lastFinishedPulling="2026-01-23 08:19:50.80534283 +0000 UTC m=+1465.285706216" observedRunningTime="2026-01-23 08:19:52.061783642 +0000 UTC m=+1466.542147048" watchObservedRunningTime="2026-01-23 08:19:52.074602754 +0000 UTC m=+1466.554966140" Jan 23 08:19:52 crc kubenswrapper[4982]: I0123 08:19:52.089246 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-zmlhm" podStartSLOduration=2.089219405 podStartE2EDuration="2.089219405s" podCreationTimestamp="2026-01-23 08:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:19:52.085559327 +0000 UTC m=+1466.565922713" watchObservedRunningTime="2026-01-23 08:19:52.089219405 +0000 UTC m=+1466.569582791" Jan 23 08:19:56 crc kubenswrapper[4982]: I0123 08:19:56.420578 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 08:19:56 crc kubenswrapper[4982]: I0123 08:19:56.421124 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 08:19:57 crc kubenswrapper[4982]: I0123 08:19:57.092330 4982 generic.go:334] "Generic (PLEG): container finished" podID="967baf94-5c3d-4033-9156-ed4b453588fe" 
containerID="929fcb8adb709d932397978e38905f34dd9ae86e421d1c518211189261ab24d5" exitCode=0 Jan 23 08:19:57 crc kubenswrapper[4982]: I0123 08:19:57.092374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zmlhm" event={"ID":"967baf94-5c3d-4033-9156-ed4b453588fe","Type":"ContainerDied","Data":"929fcb8adb709d932397978e38905f34dd9ae86e421d1c518211189261ab24d5"} Jan 23 08:19:57 crc kubenswrapper[4982]: I0123 08:19:57.429810 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 08:19:57 crc kubenswrapper[4982]: I0123 08:19:57.429807 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.564691 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.599277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-config-data\") pod \"967baf94-5c3d-4033-9156-ed4b453588fe\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.599452 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjd6z\" (UniqueName: \"kubernetes.io/projected/967baf94-5c3d-4033-9156-ed4b453588fe-kube-api-access-bjd6z\") pod \"967baf94-5c3d-4033-9156-ed4b453588fe\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.599714 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-combined-ca-bundle\") pod \"967baf94-5c3d-4033-9156-ed4b453588fe\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.599813 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-scripts\") pod \"967baf94-5c3d-4033-9156-ed4b453588fe\" (UID: \"967baf94-5c3d-4033-9156-ed4b453588fe\") " Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.606683 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967baf94-5c3d-4033-9156-ed4b453588fe-kube-api-access-bjd6z" (OuterVolumeSpecName: "kube-api-access-bjd6z") pod "967baf94-5c3d-4033-9156-ed4b453588fe" (UID: "967baf94-5c3d-4033-9156-ed4b453588fe"). InnerVolumeSpecName "kube-api-access-bjd6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.610353 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-scripts" (OuterVolumeSpecName: "scripts") pod "967baf94-5c3d-4033-9156-ed4b453588fe" (UID: "967baf94-5c3d-4033-9156-ed4b453588fe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.638807 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "967baf94-5c3d-4033-9156-ed4b453588fe" (UID: "967baf94-5c3d-4033-9156-ed4b453588fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.643961 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-config-data" (OuterVolumeSpecName: "config-data") pod "967baf94-5c3d-4033-9156-ed4b453588fe" (UID: "967baf94-5c3d-4033-9156-ed4b453588fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.701994 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjd6z\" (UniqueName: \"kubernetes.io/projected/967baf94-5c3d-4033-9156-ed4b453588fe-kube-api-access-bjd6z\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.702320 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.702329 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:58 crc kubenswrapper[4982]: I0123 08:19:58.702340 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/967baf94-5c3d-4033-9156-ed4b453588fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.114534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-zmlhm" event={"ID":"967baf94-5c3d-4033-9156-ed4b453588fe","Type":"ContainerDied","Data":"1de26b56309e5c756e64fee9e90578dac4eb1ee42e4be146b2010c49a21a357b"} Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.114575 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de26b56309e5c756e64fee9e90578dac4eb1ee42e4be146b2010c49a21a357b" Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.114603 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-zmlhm" Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.296339 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.296700 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-log" containerID="cri-o://14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab" gracePeriod=30 Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.296786 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-api" containerID="cri-o://e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db" gracePeriod=30 Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.308721 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.308968 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" containerName="nova-scheduler-scheduler" containerID="cri-o://3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" gracePeriod=30 Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.348595 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.348951 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-log" containerID="cri-o://eecd25b5c6222983918d6c166a576ca782cb31bc1e7f7b06d3fb3e0bdf72c3f1" gracePeriod=30 Jan 23 08:19:59 crc kubenswrapper[4982]: I0123 08:19:59.349075 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-metadata" containerID="cri-o://0abee9de9e3f7bd932d52b3b2531ecd4e5324a8f478b7760751bbaa3349e584e" gracePeriod=30 Jan 23 08:20:00 crc kubenswrapper[4982]: I0123 08:20:00.127570 4982 generic.go:334] "Generic (PLEG): container finished" podID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerID="eecd25b5c6222983918d6c166a576ca782cb31bc1e7f7b06d3fb3e0bdf72c3f1" exitCode=143 Jan 23 08:20:00 crc kubenswrapper[4982]: I0123 08:20:00.127667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c418e171-9f17-48e4-9269-e9b315b2a2dd","Type":"ContainerDied","Data":"eecd25b5c6222983918d6c166a576ca782cb31bc1e7f7b06d3fb3e0bdf72c3f1"} Jan 23 08:20:00 crc kubenswrapper[4982]: I0123 08:20:00.131351 4982 generic.go:334] "Generic (PLEG): container finished" podID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerID="14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab" exitCode=143 Jan 23 08:20:00 crc kubenswrapper[4982]: I0123 08:20:00.131398 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6107b5ed-2323-4fdf-92fd-8a45f5775c78","Type":"ContainerDied","Data":"14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab"} Jan 23 08:20:00 crc kubenswrapper[4982]: E0123 08:20:00.777446 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6 is running failed: container process not found" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:20:00 crc kubenswrapper[4982]: E0123 08:20:00.781759 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6 is running failed: container process not found" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:20:00 crc kubenswrapper[4982]: E0123 08:20:00.784350 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6 is running failed: container process not found" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:20:00 crc kubenswrapper[4982]: E0123 08:20:00.784436 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" containerName="nova-scheduler-scheduler" Jan 23 08:20:00 crc kubenswrapper[4982]: I0123 08:20:00.950654 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.063124 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrj47\" (UniqueName: \"kubernetes.io/projected/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-kube-api-access-lrj47\") pod \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.063609 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-config-data\") pod \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.063670 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-combined-ca-bundle\") pod \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\" (UID: \"98123d02-78b0-43af-af4f-4dc0ed1e8f9f\") " Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.072149 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-kube-api-access-lrj47" (OuterVolumeSpecName: "kube-api-access-lrj47") pod "98123d02-78b0-43af-af4f-4dc0ed1e8f9f" (UID: "98123d02-78b0-43af-af4f-4dc0ed1e8f9f"). InnerVolumeSpecName "kube-api-access-lrj47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.096770 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98123d02-78b0-43af-af4f-4dc0ed1e8f9f" (UID: "98123d02-78b0-43af-af4f-4dc0ed1e8f9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.108600 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-config-data" (OuterVolumeSpecName: "config-data") pod "98123d02-78b0-43af-af4f-4dc0ed1e8f9f" (UID: "98123d02-78b0-43af-af4f-4dc0ed1e8f9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.147592 4982 generic.go:334] "Generic (PLEG): container finished" podID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" exitCode=0 Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.147661 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98123d02-78b0-43af-af4f-4dc0ed1e8f9f","Type":"ContainerDied","Data":"3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6"} Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.147695 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"98123d02-78b0-43af-af4f-4dc0ed1e8f9f","Type":"ContainerDied","Data":"cfaa097cd5baa542ae3f860d85cc707e4088b9b578e434950f7d1473744a950d"} Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.147688 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.147778 4982 scope.go:117] "RemoveContainer" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.166224 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.166265 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.166276 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrj47\" (UniqueName: \"kubernetes.io/projected/98123d02-78b0-43af-af4f-4dc0ed1e8f9f-kube-api-access-lrj47\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.219698 4982 scope.go:117] "RemoveContainer" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" Jan 23 08:20:01 crc kubenswrapper[4982]: E0123 08:20:01.220184 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6\": container with ID starting with 3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6 not found: ID does not exist" containerID="3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.220217 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6"} err="failed to get container status \"3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6\": rpc error: code = NotFound desc = could not find container \"3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6\": container with ID starting with 3cebc5b6f468126abd7f142b824bd8be7dd3ee08809a14bc96f1dfffa25348a6 not found: ID does not exist" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.230614 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.246649 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.258922 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:01 crc kubenswrapper[4982]: E0123 08:20:01.259612 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="967baf94-5c3d-4033-9156-ed4b453588fe" containerName="nova-manage" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.259652 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="967baf94-5c3d-4033-9156-ed4b453588fe" containerName="nova-manage" Jan 23 08:20:01 crc kubenswrapper[4982]: E0123 08:20:01.259694 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" containerName="nova-scheduler-scheduler" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.259703 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" containerName="nova-scheduler-scheduler" Jan 23 
08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.259980 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="967baf94-5c3d-4033-9156-ed4b453588fe" containerName="nova-manage" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.260015 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" containerName="nova-scheduler-scheduler" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.260953 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.265104 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.267891 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js4vs\" (UniqueName: \"kubernetes.io/projected/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-kube-api-access-js4vs\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.267978 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.268054 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-config-data\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.271816 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.369726 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-config-data\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.369887 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js4vs\" (UniqueName: \"kubernetes.io/projected/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-kube-api-access-js4vs\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.369943 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.373468 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc 
kubenswrapper[4982]: I0123 08:20:01.373548 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-config-data\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.386713 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js4vs\" (UniqueName: \"kubernetes.io/projected/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-kube-api-access-js4vs\") pod \"nova-scheduler-0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.586229 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:20:01 crc kubenswrapper[4982]: I0123 08:20:01.847713 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98123d02-78b0-43af-af4f-4dc0ed1e8f9f" path="/var/lib/kubelet/pods/98123d02-78b0-43af-af4f-4dc0ed1e8f9f/volumes" Jan 23 08:20:02 crc kubenswrapper[4982]: I0123 08:20:02.093338 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:02 crc kubenswrapper[4982]: I0123 08:20:02.171478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4675771-9ce7-4bd7-a992-4358cbfdc2a0","Type":"ContainerStarted","Data":"6da82b1aafa824208b62bb64ea4bde402ad931ab5bf29d7e00a3bbb34882fca8"} Jan 23 08:20:02 crc kubenswrapper[4982]: I0123 08:20:02.753565 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": read tcp 10.217.0.2:60290->10.217.0.220:8775: read: connection reset by peer" Jan 23 08:20:02 crc kubenswrapper[4982]: I0123 08:20:02.753574 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": read tcp 10.217.0.2:60280->10.217.0.220:8775: read: connection reset by peer" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.079444 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.222340 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q926t\" (UniqueName: \"kubernetes.io/projected/6107b5ed-2323-4fdf-92fd-8a45f5775c78-kube-api-access-q926t\") pod \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.223127 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-internal-tls-certs\") pod \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.223346 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-public-tls-certs\") pod \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.223423 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-combined-ca-bundle\") pod \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.223491 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107b5ed-2323-4fdf-92fd-8a45f5775c78-logs\") pod \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.223586 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-config-data\") pod \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\" (UID: \"6107b5ed-2323-4fdf-92fd-8a45f5775c78\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.227110 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6107b5ed-2323-4fdf-92fd-8a45f5775c78-logs" (OuterVolumeSpecName: "logs") pod "6107b5ed-2323-4fdf-92fd-8a45f5775c78" (UID: "6107b5ed-2323-4fdf-92fd-8a45f5775c78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.227421 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6107b5ed-2323-4fdf-92fd-8a45f5775c78-kube-api-access-q926t" (OuterVolumeSpecName: "kube-api-access-q926t") pod "6107b5ed-2323-4fdf-92fd-8a45f5775c78" (UID: "6107b5ed-2323-4fdf-92fd-8a45f5775c78"). InnerVolumeSpecName "kube-api-access-q926t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.253489 4982 generic.go:334] "Generic (PLEG): container finished" podID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerID="0abee9de9e3f7bd932d52b3b2531ecd4e5324a8f478b7760751bbaa3349e584e" exitCode=0 Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.253580 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c418e171-9f17-48e4-9269-e9b315b2a2dd","Type":"ContainerDied","Data":"0abee9de9e3f7bd932d52b3b2531ecd4e5324a8f478b7760751bbaa3349e584e"} Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.270857 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.280250 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4675771-9ce7-4bd7-a992-4358cbfdc2a0","Type":"ContainerStarted","Data":"0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f"} Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.309838 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-config-data" (OuterVolumeSpecName: "config-data") pod "6107b5ed-2323-4fdf-92fd-8a45f5775c78" (UID: "6107b5ed-2323-4fdf-92fd-8a45f5775c78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.311942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6107b5ed-2323-4fdf-92fd-8a45f5775c78" (UID: "6107b5ed-2323-4fdf-92fd-8a45f5775c78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.316075 4982 generic.go:334] "Generic (PLEG): container finished" podID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerID="e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db" exitCode=0 Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.316135 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6107b5ed-2323-4fdf-92fd-8a45f5775c78","Type":"ContainerDied","Data":"e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db"} Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.316170 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6107b5ed-2323-4fdf-92fd-8a45f5775c78","Type":"ContainerDied","Data":"131c3fbabd23fcd8c18a8fa443ec1b796d3599a7d311e45a543c968fd6db3940"} Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.316189 4982 scope.go:117] "RemoveContainer" containerID="e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.316324 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.330092 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q926t\" (UniqueName: \"kubernetes.io/projected/6107b5ed-2323-4fdf-92fd-8a45f5775c78-kube-api-access-q926t\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.330125 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.330134 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6107b5ed-2323-4fdf-92fd-8a45f5775c78-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.330144 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.391358 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6107b5ed-2323-4fdf-92fd-8a45f5775c78" (UID: "6107b5ed-2323-4fdf-92fd-8a45f5775c78"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.428268 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.428244685 podStartE2EDuration="2.428244685s" podCreationTimestamp="2026-01-23 08:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:20:03.399258361 +0000 UTC m=+1477.879621747" watchObservedRunningTime="2026-01-23 08:20:03.428244685 +0000 UTC m=+1477.908608071" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.431579 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-nova-metadata-tls-certs\") pod \"c418e171-9f17-48e4-9269-e9b315b2a2dd\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.431754 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-combined-ca-bundle\") pod \"c418e171-9f17-48e4-9269-e9b315b2a2dd\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.431989 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c418e171-9f17-48e4-9269-e9b315b2a2dd-logs\") pod \"c418e171-9f17-48e4-9269-e9b315b2a2dd\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.432122 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28q5k\" (UniqueName: \"kubernetes.io/projected/c418e171-9f17-48e4-9269-e9b315b2a2dd-kube-api-access-28q5k\") pod \"c418e171-9f17-48e4-9269-e9b315b2a2dd\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " Jan 23 
08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.432776 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-config-data\") pod \"c418e171-9f17-48e4-9269-e9b315b2a2dd\" (UID: \"c418e171-9f17-48e4-9269-e9b315b2a2dd\") " Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.433694 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.435876 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c418e171-9f17-48e4-9269-e9b315b2a2dd-logs" (OuterVolumeSpecName: "logs") pod "c418e171-9f17-48e4-9269-e9b315b2a2dd" (UID: "c418e171-9f17-48e4-9269-e9b315b2a2dd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.440017 4982 scope.go:117] "RemoveContainer" containerID="14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.440046 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6107b5ed-2323-4fdf-92fd-8a45f5775c78" (UID: "6107b5ed-2323-4fdf-92fd-8a45f5775c78"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.443503 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c418e171-9f17-48e4-9269-e9b315b2a2dd-kube-api-access-28q5k" (OuterVolumeSpecName: "kube-api-access-28q5k") pod "c418e171-9f17-48e4-9269-e9b315b2a2dd" (UID: "c418e171-9f17-48e4-9269-e9b315b2a2dd"). InnerVolumeSpecName "kube-api-access-28q5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.480779 4982 scope.go:117] "RemoveContainer" containerID="e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db" Jan 23 08:20:03 crc kubenswrapper[4982]: E0123 08:20:03.481337 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db\": container with ID starting with e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db not found: ID does not exist" containerID="e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.481391 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db"} err="failed to get container status \"e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db\": rpc error: code = NotFound desc = could not find container \"e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db\": container with ID starting with e4822af883ff63eaeedcf94608cd945fb3373baa834fb474f587ddcd8872c5db not found: ID does not exist" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.481422 4982 scope.go:117] "RemoveContainer" containerID="14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab" Jan 23 08:20:03 crc kubenswrapper[4982]: E0123 08:20:03.481729 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab\": container with ID starting with 14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab not found: ID does not exist" containerID="14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.481830 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab"} err="failed to get container status \"14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab\": rpc error: code = NotFound desc = could not find container \"14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab\": container with ID starting with 14b6b7b4129c07a59846313192842ab2be65ed493ffb63f45b8553faad5c9cab not found: ID does not exist" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.487963 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c418e171-9f17-48e4-9269-e9b315b2a2dd" (UID: "c418e171-9f17-48e4-9269-e9b315b2a2dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.491580 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-config-data" (OuterVolumeSpecName: "config-data") pod "c418e171-9f17-48e4-9269-e9b315b2a2dd" (UID: "c418e171-9f17-48e4-9269-e9b315b2a2dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.512478 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c418e171-9f17-48e4-9269-e9b315b2a2dd" (UID: "c418e171-9f17-48e4-9269-e9b315b2a2dd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.535782 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6107b5ed-2323-4fdf-92fd-8a45f5775c78-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.536045 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c418e171-9f17-48e4-9269-e9b315b2a2dd-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.536103 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28q5k\" (UniqueName: \"kubernetes.io/projected/c418e171-9f17-48e4-9269-e9b315b2a2dd-kube-api-access-28q5k\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.536327 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.536449 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.536534 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c418e171-9f17-48e4-9269-e9b315b2a2dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.651968 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.662799 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.672825 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:03 crc kubenswrapper[4982]: E0123 08:20:03.673309 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-api" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673326 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-api" Jan 23 08:20:03 crc kubenswrapper[4982]: E0123 08:20:03.673334 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-log" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673342 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-log" Jan 23 08:20:03 crc kubenswrapper[4982]: E0123 08:20:03.673359 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" 
containerName="nova-metadata-log" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673367 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-log" Jan 23 08:20:03 crc kubenswrapper[4982]: E0123 08:20:03.673388 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-metadata" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673394 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-metadata" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673614 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-metadata" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673646 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-log" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673657 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" containerName="nova-metadata-log" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.673680 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" containerName="nova-api-api" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.674843 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.684359 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.684736 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.684725 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.694937 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.840201 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6107b5ed-2323-4fdf-92fd-8a45f5775c78" path="/var/lib/kubelet/pods/6107b5ed-2323-4fdf-92fd-8a45f5775c78/volumes" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.842402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-config-data\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.842551 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.842647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.842737 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-public-tls-certs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.842870 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmnbw\" (UniqueName: \"kubernetes.io/projected/6d1bed17-dcb3-462e-8227-272c4c1fbc16-kube-api-access-hmnbw\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.842946 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1bed17-dcb3-462e-8227-272c4c1fbc16-logs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.947543 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.947642 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.947692 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-public-tls-certs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.947799 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmnbw\" (UniqueName: \"kubernetes.io/projected/6d1bed17-dcb3-462e-8227-272c4c1fbc16-kube-api-access-hmnbw\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.947855 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1bed17-dcb3-462e-8227-272c4c1fbc16-logs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.947933 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-config-data\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.949540 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1bed17-dcb3-462e-8227-272c4c1fbc16-logs\") pod \"nova-api-0\" 
(UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.951815 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-public-tls-certs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.965761 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.966273 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmnbw\" (UniqueName: \"kubernetes.io/projected/6d1bed17-dcb3-462e-8227-272c4c1fbc16-kube-api-access-hmnbw\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.967975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-config-data\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.969209 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " pod="openstack/nova-api-0" Jan 23 08:20:03 crc kubenswrapper[4982]: I0123 08:20:03.995433 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.350818 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c418e171-9f17-48e4-9269-e9b315b2a2dd","Type":"ContainerDied","Data":"5a56fda07757e6da2902f9c4ef53c75b2798a52653df86a40dd17de1b0d9d945"} Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.350872 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.351173 4982 scope.go:117] "RemoveContainer" containerID="0abee9de9e3f7bd932d52b3b2531ecd4e5324a8f478b7760751bbaa3349e584e" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.387936 4982 scope.go:117] "RemoveContainer" containerID="eecd25b5c6222983918d6c166a576ca782cb31bc1e7f7b06d3fb3e0bdf72c3f1" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.406184 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.427776 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.436552 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.438474 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.440521 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.440794 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.450014 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.498536 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.563468 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7wf\" (UniqueName: \"kubernetes.io/projected/10c2a3b1-f4b9-4388-9261-fa703cfaf757-kube-api-access-sc7wf\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.563547 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-config-data\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.563948 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.564042 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c2a3b1-f4b9-4388-9261-fa703cfaf757-logs\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.564257 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.665955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.666062 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7wf\" (UniqueName: \"kubernetes.io/projected/10c2a3b1-f4b9-4388-9261-fa703cfaf757-kube-api-access-sc7wf\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.666114 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-config-data\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.666174 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.666204 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c2a3b1-f4b9-4388-9261-fa703cfaf757-logs\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.666669 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c2a3b1-f4b9-4388-9261-fa703cfaf757-logs\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.671468 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.673451 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-config-data\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.681262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.688521 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7wf\" (UniqueName: \"kubernetes.io/projected/10c2a3b1-f4b9-4388-9261-fa703cfaf757-kube-api-access-sc7wf\") pod \"nova-metadata-0\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " pod="openstack/nova-metadata-0" Jan 23 08:20:04 crc kubenswrapper[4982]: I0123 08:20:04.759707 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:20:05 crc kubenswrapper[4982]: W0123 08:20:05.253175 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10c2a3b1_f4b9_4388_9261_fa703cfaf757.slice/crio-43b1b00d8e65a1c42d0e08c00e6ffd4ffb634a0e8fde181a9084e037d64d933a WatchSource:0}: Error finding container 43b1b00d8e65a1c42d0e08c00e6ffd4ffb634a0e8fde181a9084e037d64d933a: Status 404 returned error can't find the container with id 43b1b00d8e65a1c42d0e08c00e6ffd4ffb634a0e8fde181a9084e037d64d933a Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.258200 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.362369 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d1bed17-dcb3-462e-8227-272c4c1fbc16","Type":"ContainerStarted","Data":"d7c245dddb72e897f40f47d481d743ead593b7d7c2dc1672b3d530d7eb076629"} Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.362423 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d1bed17-dcb3-462e-8227-272c4c1fbc16","Type":"ContainerStarted","Data":"3a4d51e189b74fd0d7cce186ffa8581a0e4f9c7c3df73a5e73547594eddaa7d7"} Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.362435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d1bed17-dcb3-462e-8227-272c4c1fbc16","Type":"ContainerStarted","Data":"bd1ba21f47b2385268cce380c1fb4827dfe12ae51c2bfe3691f31eb561a11667"} Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.365294 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10c2a3b1-f4b9-4388-9261-fa703cfaf757","Type":"ContainerStarted","Data":"43b1b00d8e65a1c42d0e08c00e6ffd4ffb634a0e8fde181a9084e037d64d933a"} Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.385732 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.385708345 podStartE2EDuration="2.385708345s" podCreationTimestamp="2026-01-23 08:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:20:05.381093702 +0000 UTC m=+1479.861457088" watchObservedRunningTime="2026-01-23 08:20:05.385708345 +0000 UTC m=+1479.866071741" Jan 23 08:20:05 crc kubenswrapper[4982]: I0123 08:20:05.859580 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c418e171-9f17-48e4-9269-e9b315b2a2dd" path="/var/lib/kubelet/pods/c418e171-9f17-48e4-9269-e9b315b2a2dd/volumes" Jan 23 08:20:06 crc kubenswrapper[4982]: I0123 08:20:06.378546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10c2a3b1-f4b9-4388-9261-fa703cfaf757","Type":"ContainerStarted","Data":"675dfaa3136aeef07453de25344bb073c61967ea34a6dc134fc80aa1017e0b49"} Jan 23 08:20:06 crc kubenswrapper[4982]: I0123 08:20:06.378611 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10c2a3b1-f4b9-4388-9261-fa703cfaf757","Type":"ContainerStarted","Data":"f7d962ecdf637e775a21dfd3f37e7d877a96731f2de9d97ae4e8bd5571888259"} Jan 23 08:20:06 crc kubenswrapper[4982]: I0123 08:20:06.408384 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.408362004 
podStartE2EDuration="2.408362004s" podCreationTimestamp="2026-01-23 08:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:20:06.400141854 +0000 UTC m=+1480.880505240" watchObservedRunningTime="2026-01-23 08:20:06.408362004 +0000 UTC m=+1480.888725380" Jan 23 08:20:06 crc kubenswrapper[4982]: I0123 08:20:06.587093 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 08:20:09 crc kubenswrapper[4982]: I0123 08:20:09.760560 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 08:20:09 crc kubenswrapper[4982]: I0123 08:20:09.761021 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 08:20:11 crc kubenswrapper[4982]: I0123 08:20:11.587348 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 08:20:11 crc kubenswrapper[4982]: I0123 08:20:11.619096 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 08:20:12 crc kubenswrapper[4982]: I0123 08:20:12.470694 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 08:20:13 crc kubenswrapper[4982]: I0123 08:20:13.997201 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 08:20:13 crc kubenswrapper[4982]: I0123 08:20:13.998461 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 08:20:14 crc kubenswrapper[4982]: I0123 08:20:14.761969 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 08:20:14 crc kubenswrapper[4982]: I0123 08:20:14.762010 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 08:20:15 crc kubenswrapper[4982]: I0123 08:20:15.024813 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 08:20:15 crc kubenswrapper[4982]: I0123 08:20:15.025115 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 08:20:15 crc kubenswrapper[4982]: I0123 08:20:15.765864 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 08:20:15 crc kubenswrapper[4982]: I0123 08:20:15.769890 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 08:20:15 crc kubenswrapper[4982]: 
I0123 08:20:15.909812 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 08:20:15 crc kubenswrapper[4982]: I0123 08:20:15.931747 4982 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod36962664-0eb8-4871-8cf2-953c55dae470"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod36962664-0eb8-4871-8cf2-953c55dae470] : Timed out while waiting for systemd to remove kubepods-besteffort-pod36962664_0eb8_4871_8cf2_953c55dae470.slice" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.004050 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.004611 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.004992 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.005023 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.011304 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.015664 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.765336 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.766101 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 08:20:24 crc kubenswrapper[4982]: I0123 08:20:24.769037 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 08:20:25 crc kubenswrapper[4982]: I0123 08:20:25.763044 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.190536 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xf498"] Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.193391 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.209711 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf498"] Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.355736 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-catalog-content\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.355886 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-utilities\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.355937 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsplb\" (UniqueName: \"kubernetes.io/projected/16911749-9c58-48fb-8f10-e60d8c4dc8f2-kube-api-access-dsplb\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.458620 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-catalog-content\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.459301 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-utilities\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.459435 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsplb\" (UniqueName: \"kubernetes.io/projected/16911749-9c58-48fb-8f10-e60d8c4dc8f2-kube-api-access-dsplb\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.459644 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-utilities\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.459327 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-catalog-content\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.496238 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dsplb\" (UniqueName: \"kubernetes.io/projected/16911749-9c58-48fb-8f10-e60d8c4dc8f2-kube-api-access-dsplb\") pod \"redhat-marketplace-xf498\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:33 crc kubenswrapper[4982]: I0123 08:20:33.527281 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:34 crc kubenswrapper[4982]: I0123 08:20:34.073793 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf498"] Jan 23 08:20:34 crc kubenswrapper[4982]: I0123 08:20:34.695649 4982 generic.go:334] "Generic (PLEG): container finished" podID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerID="810ee3ae6127d10449b9bdf1eb435107a0333b102866f4edefef505018126638" exitCode=0 Jan 23 08:20:34 crc kubenswrapper[4982]: I0123 08:20:34.696031 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf498" event={"ID":"16911749-9c58-48fb-8f10-e60d8c4dc8f2","Type":"ContainerDied","Data":"810ee3ae6127d10449b9bdf1eb435107a0333b102866f4edefef505018126638"} Jan 23 08:20:34 crc kubenswrapper[4982]: I0123 08:20:34.696068 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf498" event={"ID":"16911749-9c58-48fb-8f10-e60d8c4dc8f2","Type":"ContainerStarted","Data":"8b10ee86255dc1a41fa32b13f781079158a2c81d6fc7cbb9938120aae20b3752"} Jan 23 08:20:36 crc kubenswrapper[4982]: I0123 08:20:36.725651 4982 generic.go:334] "Generic (PLEG): container finished" podID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerID="f3d20b18f910a32517f159b80557a4d4be51a59dc7342da886f6d9a6ed60c6d4" exitCode=0 Jan 23 08:20:36 crc kubenswrapper[4982]: I0123 08:20:36.725746 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf498" event={"ID":"16911749-9c58-48fb-8f10-e60d8c4dc8f2","Type":"ContainerDied","Data":"f3d20b18f910a32517f159b80557a4d4be51a59dc7342da886f6d9a6ed60c6d4"} Jan 23 08:20:37 crc kubenswrapper[4982]: I0123 08:20:37.751139 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf498" event={"ID":"16911749-9c58-48fb-8f10-e60d8c4dc8f2","Type":"ContainerStarted","Data":"244fc34fcda1a8f51ba5027f68f3edab915eac4fd222494dabe55f97be2d3fd1"} Jan 23 08:20:37 crc kubenswrapper[4982]: I0123 08:20:37.783056 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xf498" podStartSLOduration=2.318019438 podStartE2EDuration="4.783030252s" podCreationTimestamp="2026-01-23 08:20:33 +0000 UTC" firstStartedPulling="2026-01-23 08:20:34.698103894 +0000 UTC m=+1509.178467280" lastFinishedPulling="2026-01-23 08:20:37.163114708 +0000 UTC m=+1511.643478094" observedRunningTime="2026-01-23 08:20:37.77020832 +0000 UTC m=+1512.250571736" watchObservedRunningTime="2026-01-23 08:20:37.783030252 +0000 UTC m=+1512.263393688" Jan 23 08:20:43 crc kubenswrapper[4982]: I0123 08:20:43.529336 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:43 crc kubenswrapper[4982]: I0123 08:20:43.530095 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:43 crc kubenswrapper[4982]: I0123 08:20:43.583707 4982 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:43 crc kubenswrapper[4982]: I0123 08:20:43.866904 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:43 crc kubenswrapper[4982]: I0123 08:20:43.922566 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf498"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.667992 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.668571 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" containerName="openstackclient" containerID="cri-o://5681b28d1051a5f8acd2929e33682deb65b19b740cd2b6c2b2d8317baaa49110" gracePeriod=2 Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.720216 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.862081 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ee09-account-create-update-vv5hs"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.895987 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ee09-account-create-update-vv5hs"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.905072 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.948817 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6759-account-create-update-rmsm9"] Jan 23 08:20:44 crc kubenswrapper[4982]: I0123 08:20:44.967772 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6759-account-create-update-rmsm9"] Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.150174 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.150274 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data podName:26d23dde-8aac-481b-bfdb-ce3315166bc4 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:45.650238411 +0000 UTC m=+1520.130601797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data") pod "rabbitmq-server-0" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4") : configmap "rabbitmq-config-data" not found Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.211462 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-6ksf4"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.257279 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qdkmz"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.302486 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bf3c-account-create-update-5lf2k"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.320694 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bf3c-account-create-update-5lf2k"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.367983 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-jvgnd"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.368233 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-jvgnd" podUID="95b0abeb-729f-49a9-8d9f-d931ec9f0814" containerName="openstack-network-exporter" containerID="cri-o://6d3c5732d58d4047ed886c95a750296b113ce1ccf59d5efffc3d6754ac5aa0bb" gracePeriod=30 Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.425092 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.443884 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fsxbv"] Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.444440 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" containerName="openstackclient" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.444454 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" containerName="openstackclient" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.444733 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" containerName="openstackclient" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.445498 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.453010 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.479051 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fsxbv"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.551700 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-slkch"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.570159 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-slkch"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.580392 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b09c921-bbfa-4934-9c8b-ffdd0f294481-operator-scripts\") pod \"root-account-create-update-fsxbv\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.580525 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99xln\" (UniqueName: \"kubernetes.io/projected/4b09c921-bbfa-4934-9c8b-ffdd0f294481-kube-api-access-99xln\") pod \"root-account-create-update-fsxbv\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.582490 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.582542 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data podName:0b4b8957-4a36-4d8b-a591-fdcbe696c808 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:46.082527235 +0000 UTC m=+1520.562890621 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data") pod "rabbitmq-cell1-server-0" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808") : configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.583104 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-7lnhw"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.597906 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-7lnhw"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.618250 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l8fs8"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.634415 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l8fs8"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.660842 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1720-account-create-update-5xfq9"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.685004 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99xln\" (UniqueName: \"kubernetes.io/projected/4b09c921-bbfa-4934-9c8b-ffdd0f294481-kube-api-access-99xln\") pod \"root-account-create-update-fsxbv\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.685247 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b09c921-bbfa-4934-9c8b-ffdd0f294481-operator-scripts\") pod \"root-account-create-update-fsxbv\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.685930 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b09c921-bbfa-4934-9c8b-ffdd0f294481-operator-scripts\") pod \"root-account-create-update-fsxbv\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.686008 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 23 08:20:45 crc kubenswrapper[4982]: E0123 08:20:45.686277 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data podName:26d23dde-8aac-481b-bfdb-ce3315166bc4 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:46.686259795 +0000 UTC m=+1521.166623181 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data") pod "rabbitmq-server-0" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4") : configmap "rabbitmq-config-data" not found Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.695852 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1720-account-create-update-5xfq9"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.705858 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b9da-account-create-update-f8dpr"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.717312 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b9da-account-create-update-f8dpr"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.745400 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99xln\" (UniqueName: \"kubernetes.io/projected/4b09c921-bbfa-4934-9c8b-ffdd0f294481-kube-api-access-99xln\") pod \"root-account-create-update-fsxbv\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.790416 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-f2tkd"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.891427 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f789991-fa7a-45f6-a59e-9148eef61221" path="/var/lib/kubelet/pods/2f789991-fa7a-45f6-a59e-9148eef61221/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.892607 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3351eff8-fcf1-4b8f-a12d-420d953010f0" path="/var/lib/kubelet/pods/3351eff8-fcf1-4b8f-a12d-420d953010f0/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.893573 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="337e4f54-165a-4b51-90f9-32c8599bc1e3" path="/var/lib/kubelet/pods/337e4f54-165a-4b51-90f9-32c8599bc1e3/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.894355 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c0742f3-a1af-45c5-aa3f-a7cf66d4796b" path="/var/lib/kubelet/pods/3c0742f3-a1af-45c5-aa3f-a7cf66d4796b/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.894495 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jvgnd_95b0abeb-729f-49a9-8d9f-d931ec9f0814/openstack-network-exporter/0.log" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.894551 4982 generic.go:334] "Generic (PLEG): container finished" podID="95b0abeb-729f-49a9-8d9f-d931ec9f0814" containerID="6d3c5732d58d4047ed886c95a750296b113ce1ccf59d5efffc3d6754ac5aa0bb" exitCode=2 Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.894756 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xf498" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="registry-server" containerID="cri-o://244fc34fcda1a8f51ba5027f68f3edab915eac4fd222494dabe55f97be2d3fd1" gracePeriod=2 Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.895474 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a06e0c-0060-42b6-a09c-a805ea7b2878" path="/var/lib/kubelet/pods/77a06e0c-0060-42b6-a09c-a805ea7b2878/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.896092 4982 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95438dd4-92f1-4e6f-bff5-2b3471920e2f" path="/var/lib/kubelet/pods/95438dd4-92f1-4e6f-bff5-2b3471920e2f/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.896618 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e66ea6-0018-46c2-9c01-70f84bc2f166" path="/var/lib/kubelet/pods/b8e66ea6-0018-46c2-9c01-70f84bc2f166/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.899219 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9ace387-f81f-4fe6-a4ca-3b34b6ea158d" path="/var/lib/kubelet/pods/f9ace387-f81f-4fe6-a4ca-3b34b6ea158d/volumes" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.900086 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-f2tkd"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.900113 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.900141 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jvgnd" event={"ID":"95b0abeb-729f-49a9-8d9f-d931ec9f0814","Type":"ContainerDied","Data":"6d3c5732d58d4047ed886c95a750296b113ce1ccf59d5efffc3d6754ac5aa0bb"} Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.900484 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="openstack-network-exporter" containerID="cri-o://61f6f16773111fc5cd5315ce04a7454bb22686d399b1293d696bebdc325184ec" gracePeriod=300 Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.932265 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-83ff-account-create-update-9nrqn"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.950080 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.974697 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-83ff-account-create-update-9nrqn"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.996848 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.997153 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="ovn-northd" containerID="cri-o://9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" gracePeriod=30 Jan 23 08:20:45 crc kubenswrapper[4982]: I0123 08:20:45.997290 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="openstack-network-exporter" containerID="cri-o://1cdfe73af7e6305c3c12db7d9df0b35dbe1da5d1128309e49095c22ea2966020" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.010125 4982 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-qdkmz" message=< Jan 23 08:20:46 crc kubenswrapper[4982]: Exiting ovn-controller (1) [ OK ] Jan 23 08:20:46 crc kubenswrapper[4982]: > Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.010167 4982 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-qdkmz" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" containerID="cri-o://fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.010212 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-qdkmz" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" containerID="cri-o://fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.041721 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mj296"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.093831 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mj296"] Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.101650 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.101721 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data podName:0b4b8957-4a36-4d8b-a591-fdcbe696c808 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:47.101701779 +0000 UTC m=+1521.582065165 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data") pod "rabbitmq-cell1-server-0" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808") : configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.139744 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ppl7s"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.189697 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ppl7s"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.297341 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-zmlhm"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.379736 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-zmlhm"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.564873 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.565910 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="openstack-network-exporter" containerID="cri-o://2fd889b537634e373cd79c15112db577cb478310dae8eb3f23acd8930c050163" gracePeriod=300 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.589702 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2642-account-create-update-29gw9"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.616032 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2642-account-create-update-29gw9"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.654053 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="ovsdbserver-sb" containerID="cri-o://3cd9a32a141012cf4bba8863a14bb8c7f3122d23d541a734493f06ad3ca70f53" gracePeriod=300 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.666727 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4wwwj"] Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.708751 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.710040 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data podName:26d23dde-8aac-481b-bfdb-ce3315166bc4 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:48.710021053 +0000 UTC m=+1523.190384439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data") pod "rabbitmq-server-0" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4") : configmap "rabbitmq-config-data" not found Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.723147 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4wwwj"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.790690 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.791028 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-log" containerID="cri-o://2ae97d13aafab9b78c55d20b89d369229f390d7aed5aea78a7522b15b9f48310" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.791613 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-httpd" containerID="cri-o://f85206e6ea38cc3b96a676eb48cd6ab7fe8f14acdcd1691d0be85962d1d2a103" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.819600 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ddf676c6b-tgrs7"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.822703 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-ddf676c6b-tgrs7" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-log" containerID="cri-o://aeab8722bcf476c6445e671d68cf2b7cfff2199e663f2a397789bae6c713a5ba" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.822861 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-ddf676c6b-tgrs7" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-api" containerID="cri-o://3e9ea0f153b90fef5a0f09d7ff49a929afa3ab0bee839a8bf8bff02973243915" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.845155 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e is running failed: container process not found" containerID="fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.855570 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e is running failed: container process not found" containerID="fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.859881 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e is running failed: container process not found" containerID="fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" 
cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 08:20:46 crc kubenswrapper[4982]: E0123 08:20:46.859936 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-qdkmz" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.868271 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.868557 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="dnsmasq-dns" containerID="cri-o://39f9cc3dea990a44f6526e8d44212606fd2bd98975af302178a764d6956a1b61" gracePeriod=10 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.888688 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f7c478fb9-56vcd"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.895497 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f7c478fb9-56vcd" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-api" containerID="cri-o://af380f9803b5a0cc4d7580e7e89b720729e3c7d4ba623f631701258b3aa79272" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.895685 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f7c478fb9-56vcd" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-httpd" containerID="cri-o://447acfd4c048d55e1f08058b4c91272ee9f4e8985a1b8beb44df17770342a816" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.947058 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="ovsdbserver-nb" containerID="cri-o://5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515" gracePeriod=300 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.969561 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.969924 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-log" containerID="cri-o://98cf2cebef5f1b732d6a19c80cc61916d6f7fb654b98a4e5043b8c16409a7e5f" gracePeriod=30 Jan 23 08:20:46 crc kubenswrapper[4982]: I0123 08:20:46.970502 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-httpd" containerID="cri-o://3cd84b06fc466f0e513e71745eb756df662bac487350768995821b496abc50d0" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.007333 4982 generic.go:334] "Generic (PLEG): container finished" podID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerID="244fc34fcda1a8f51ba5027f68f3edab915eac4fd222494dabe55f97be2d3fd1" exitCode=0 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.007454 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xf498" event={"ID":"16911749-9c58-48fb-8f10-e60d8c4dc8f2","Type":"ContainerDied","Data":"244fc34fcda1a8f51ba5027f68f3edab915eac4fd222494dabe55f97be2d3fd1"} Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.023707 4982 generic.go:334] "Generic (PLEG): container finished" podID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerID="1cdfe73af7e6305c3c12db7d9df0b35dbe1da5d1128309e49095c22ea2966020" exitCode=2 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.023768 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06","Type":"ContainerDied","Data":"1cdfe73af7e6305c3c12db7d9df0b35dbe1da5d1128309e49095c22ea2966020"} Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.085056 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-mqmqp"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.103797 4982 generic.go:334] "Generic (PLEG): container finished" podID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerID="fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" exitCode=0 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.103892 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz" event={"ID":"10c1d9ff-78da-4916-b01b-e4eb2935c4c3","Type":"ContainerDied","Data":"fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e"} Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.107500 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kjfz5"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.119481 4982 generic.go:334] "Generic (PLEG): container finished" podID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerID="61f6f16773111fc5cd5315ce04a7454bb22686d399b1293d696bebdc325184ec" exitCode=2 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.119605 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"07620928-c400-47dd-b76d-25f31c3c1e0e","Type":"ContainerDied","Data":"61f6f16773111fc5cd5315ce04a7454bb22686d399b1293d696bebdc325184ec"} Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.120084 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kjfz5"] Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.144996 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.145063 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data podName:0b4b8957-4a36-4d8b-a591-fdcbe696c808 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:49.14504752 +0000 UTC m=+1523.625410896 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data") pod "rabbitmq-cell1-server-0" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808") : configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.145389 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-mqmqp"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.163954 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.182014 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.182573 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-server" containerID="cri-o://54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183159 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="swift-recon-cron" containerID="cri-o://54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183224 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="rsync" containerID="cri-o://79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183273 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-expirer" containerID="cri-o://de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183355 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-updater" containerID="cri-o://0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183398 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-auditor" containerID="cri-o://d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183443 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-replicator" containerID="cri-o://f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183505 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-reaper" containerID="cri-o://49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e" gracePeriod=30 Jan 23 08:20:47 crc 
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183526 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-server" containerID="cri-o://674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183494 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-server" containerID="cri-o://3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183598 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-auditor" containerID="cri-o://16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183611 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-updater" containerID="cri-o://4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183673 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-replicator" containerID="cri-o://18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183692 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-auditor" containerID="cri-o://2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.183733 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-replicator" containerID="cri-o://7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.215392 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.215722 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-log" containerID="cri-o://f7d962ecdf637e775a21dfd3f37e7d877a96731f2de9d97ae4e8bd5571888259" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.216308 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-metadata" containerID="cri-o://675dfaa3136aeef07453de25344bb073c61967ea34a6dc134fc80aa1017e0b49" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.286159 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.304537 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8122-account-create-update-qdxp2"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.322494 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8122-account-create-update-qdxp2"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.350287 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="rabbitmq" containerID="cri-o://45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f" gracePeriod=604800
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.387685 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.387961 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-log" containerID="cri-o://3a4d51e189b74fd0d7cce186ffa8581a0e4f9c7c3df73a5e73547594eddaa7d7" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.388089 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-api" containerID="cri-o://d7c245dddb72e897f40f47d481d743ead593b7d7c2dc1672b3d530d7eb076629" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.406944 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gmmx2"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.420783 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7rttj"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.446841 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gmmx2"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.472462 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8scdh"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.489951 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7rttj"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.502779 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8scdh"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.513437 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xtlkt"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.519845 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.520162 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api-log" containerID="cri-o://40ca415939a7e1e94f0012bb346eed19b90d1b52ec23f37e175fe94046754239" gracePeriod=30
Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.520737 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api" containerID="cri-o://0b613fdca688bd3035970b7dc6b487ea56dac750dadeb82a2a1db42250f517a7" gracePeriod=30
pods=["openstack/nova-cell1-db-create-xtlkt"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.570730 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.571138 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="cinder-scheduler" containerID="cri-o://d59b5dc7ed6c69bba6c7d620f3e44ff6159fc8950b7d39dcc371af91ae5c4782" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.572030 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="probe" containerID="cri-o://2fdf936f82b58284aa5deb1d6981c7ab39aeaacc42c4318726f6a5011a3f326d" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.580920 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kfcmb"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.591164 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kfcmb"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.597174 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" containerID="cri-o://3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" gracePeriod=28 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.620090 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-r4ns4"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.634981 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-r4ns4"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.653560 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qdxbw"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.719763 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.225:5353: connect: connection refused" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.772931 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" containerID="cri-o://52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" gracePeriod=28 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.773739 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qdxbw"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.858572 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="be869517-a530-472f-b11e-59558aafc066" containerName="galera" containerID="cri-o://881b906ef1be3ae613692e976ccc55331837c84c9820bb020dc06fe11f437493" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.862495 4982 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/swift-proxy-696995d9b9-k2q7r" secret="" err="secret \"swift-swift-dockercfg-kq2tm\" not found" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.883471 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06516bc7-9cda-4228-98b9-4e52fcd3bfa1" path="/var/lib/kubelet/pods/06516bc7-9cda-4228-98b9-4e52fcd3bfa1/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.884594 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a32c4a5-531d-4ce6-a01e-eb3889d85751" path="/var/lib/kubelet/pods/3a32c4a5-531d-4ce6-a01e-eb3889d85751/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.885245 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d3dcf6f-a3f4-4cd8-848b-398bb9338b24" path="/var/lib/kubelet/pods/3d3dcf6f-a3f4-4cd8-848b-398bb9338b24/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.886012 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c" path="/var/lib/kubelet/pods/422260e7-5cc6-4ad1-9b7b-d3eccd13fa4c/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.887598 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da6f6c3-ea0b-47d7-805c-c55b43c96c1d" path="/var/lib/kubelet/pods/4da6f6c3-ea0b-47d7-805c-c55b43c96c1d/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.889551 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbfefec-4285-4b91-a9d8-5dc47c0efb34" path="/var/lib/kubelet/pods/5bbfefec-4285-4b91-a9d8-5dc47c0efb34/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.890536 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df7e23c-1217-4b12-b821-ddd7de9bbf97" path="/var/lib/kubelet/pods/5df7e23c-1217-4b12-b821-ddd7de9bbf97/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.892095 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85a6abf4-6ced-43b8-9b90-f525d6ed23a7" path="/var/lib/kubelet/pods/85a6abf4-6ced-43b8-9b90-f525d6ed23a7/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.893027 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87959597-4d4c-4063-9bcc-a91c597281f6" path="/var/lib/kubelet/pods/87959597-4d4c-4063-9bcc-a91c597281f6/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.909776 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967baf94-5c3d-4033-9156-ed4b453588fe" path="/var/lib/kubelet/pods/967baf94-5c3d-4033-9156-ed4b453588fe/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.910838 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b061f3a-644c-426d-8c3f-d53f664a2703" path="/var/lib/kubelet/pods/9b061f3a-644c-426d-8c3f-d53f664a2703/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.911548 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7300e5-e28d-4690-b05f-1cfb0c034c71" path="/var/lib/kubelet/pods/9b7300e5-e28d-4690-b05f-1cfb0c034c71/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.919695 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c680d8da-c090-4c7e-8068-baecb59402e8" path="/var/lib/kubelet/pods/c680d8da-c090-4c7e-8068-baecb59402e8/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.933845 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf975c1-5014-40e8-adfe-55d5d0c5bdd5" 
path="/var/lib/kubelet/pods/cbf975c1-5014-40e8-adfe-55d5d0c5bdd5/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.940898 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc972f61-262c-4435-b54a-5705e0c00c8c" path="/var/lib/kubelet/pods/dc972f61-262c-4435-b54a-5705e0c00c8c/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.947761 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e046e631-fa39-4492-8774-1976548076d7" path="/var/lib/kubelet/pods/e046e631-fa39-4492-8774-1976548076d7/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.948426 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e338b64b-08c9-4c05-b50d-4d476391ef43" path="/var/lib/kubelet/pods/e338b64b-08c9-4c05-b50d-4d476391ef43/volumes" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.949703 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.949764 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6959c78bf5-trn2r"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.949786 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.949845 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7965657874-h2sgm"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.951348 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener-log" containerID="cri-o://74a7b42a19564dc748ab37edddb7525e4b1a173e9cc8b1eb1fb22abc814887a5" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.951579 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6959c78bf5-trn2r" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker-log" containerID="cri-o://db343ad642c9cc837f707a4e56a8a8c708cc25fec21d7f114d380fb4ad3f1acf" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.952063 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener" containerID="cri-o://121ac015dcf20bdb873bb6bfe8f528431aaf6cd7e2e7cfb1ebc3aff798e014ee" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.952233 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6959c78bf5-trn2r" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker" containerID="cri-o://bd096c9a64adc5e24222bd520105f18c192474be440d5f8370a222b01b9f8c41" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.954504 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7965657874-h2sgm" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api-log" containerID="cri-o://44123210e26061b640cdcc4b3b9dc1005507bd1badfae509c93d9f329c1f4807" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.954709 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-7965657874-h2sgm" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api" containerID="cri-o://cf9c6471bb4e01bd9fd153522ac53f92eade7e364f44fd40abf8bdcad7b61af8" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.967030 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fsxbv"] Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.976756 4982 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.976792 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.976807 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-696995d9b9-k2q7r: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.976860 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift podName:740c5af1-ac3c-431a-ab6e-c5aecf933fea nodeName:}" failed. No retries permitted until 2026-01-23 08:20:48.476839601 +0000 UTC m=+1522.957203037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift") pod "swift-proxy-696995d9b9-k2q7r" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.982138 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.982397 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="996235df-739a-4029-96df-88e45a44a38c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5" gracePeriod=30 Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.984689 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 23 08:20:47 crc kubenswrapper[4982]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 23 08:20:47 crc kubenswrapper[4982]: Jan 23 08:20:47 crc kubenswrapper[4982]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 23 08:20:47 crc kubenswrapper[4982]: Jan 23 08:20:47 crc kubenswrapper[4982]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 23 08:20:47 crc kubenswrapper[4982]: Jan 23 08:20:47 crc kubenswrapper[4982]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 23 08:20:47 crc kubenswrapper[4982]: Jan 23 08:20:47 crc kubenswrapper[4982]: if [ -n "" ]; then Jan 23 08:20:47 crc kubenswrapper[4982]: GRANT_DATABASE="" Jan 23 08:20:47 crc kubenswrapper[4982]: else Jan 23 08:20:47 crc kubenswrapper[4982]: GRANT_DATABASE="*" Jan 23 08:20:47 crc kubenswrapper[4982]: fi Jan 23 08:20:47 crc kubenswrapper[4982]: Jan 23 08:20:47 crc kubenswrapper[4982]: # going for maximum compatibility here: Jan 23 08:20:47 crc kubenswrapper[4982]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Jan 23 08:20:47 crc kubenswrapper[4982]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 23 08:20:47 crc kubenswrapper[4982]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 23 08:20:47 crc kubenswrapper[4982]: # support updates Jan 23 08:20:47 crc kubenswrapper[4982]: Jan 23 08:20:47 crc kubenswrapper[4982]: $MYSQL_CMD < logger="UnhandledError" Jan 23 08:20:47 crc kubenswrapper[4982]: E0123 08:20:47.985862 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-fsxbv" podUID="4b09c921-bbfa-4934-9c8b-ffdd0f294481" Jan 23 08:20:47 crc kubenswrapper[4982]: I0123 08:20:47.998879 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ppn57"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.000543 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jvgnd_95b0abeb-729f-49a9-8d9f-d931ec9f0814/openstack-network-exporter/0.log" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.000616 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jvgnd" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.033016 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.033546 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="271fae4a-bd02-4281-a588-2784769ed552" containerName="nova-cell0-conductor-conductor" containerID="cri-o://6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278" gracePeriod=30 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.053993 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ppn57"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.063115 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.063386 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" containerName="nova-cell1-conductor-conductor" containerID="cri-o://4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079" gracePeriod=30 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.076589 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-52hsl"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.077602 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-metrics-certs-tls-certs\") pod \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079157 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovn-rundir\") pod \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\" (UID: 
\"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079289 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovs-rundir\") pod \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079414 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75glv\" (UniqueName: \"kubernetes.io/projected/95b0abeb-729f-49a9-8d9f-d931ec9f0814-kube-api-access-75glv\") pod \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079492 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b0abeb-729f-49a9-8d9f-d931ec9f0814-config\") pod \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079554 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-combined-ca-bundle\") pod \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\" (UID: \"95b0abeb-729f-49a9-8d9f-d931ec9f0814\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079723 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "95b0abeb-729f-49a9-8d9f-d931ec9f0814" (UID: "95b0abeb-729f-49a9-8d9f-d931ec9f0814"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.079767 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "95b0abeb-729f-49a9-8d9f-d931ec9f0814" (UID: "95b0abeb-729f-49a9-8d9f-d931ec9f0814"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.080787 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b0abeb-729f-49a9-8d9f-d931ec9f0814-config" (OuterVolumeSpecName: "config") pod "95b0abeb-729f-49a9-8d9f-d931ec9f0814" (UID: "95b0abeb-729f-49a9-8d9f-d931ec9f0814"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.080997 4982 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.081024 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b0abeb-729f-49a9-8d9f-d931ec9f0814-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.081061 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/95b0abeb-729f-49a9-8d9f-d931ec9f0814-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.086827 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b0abeb-729f-49a9-8d9f-d931ec9f0814-kube-api-access-75glv" (OuterVolumeSpecName: "kube-api-access-75glv") pod "95b0abeb-729f-49a9-8d9f-d931ec9f0814" (UID: "95b0abeb-729f-49a9-8d9f-d931ec9f0814"). InnerVolumeSpecName "kube-api-access-75glv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.097923 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-52hsl"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.102138 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="rabbitmq" containerID="cri-o://2a7e5d12bd1b08f1897cad70a1c21dbddaaabae2025c8884bde0a2e110eb49ed" gracePeriod=604800 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.116009 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.116326 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" containerName="nova-scheduler-scheduler" containerID="cri-o://0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f" gracePeriod=30 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.125281 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fsxbv"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.147319 4982 generic.go:334] "Generic (PLEG): container finished" podID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerID="aeab8722bcf476c6445e671d68cf2b7cfff2199e663f2a397789bae6c713a5ba" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.147387 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf676c6b-tgrs7" event={"ID":"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b","Type":"ContainerDied","Data":"aeab8722bcf476c6445e671d68cf2b7cfff2199e663f2a397789bae6c713a5ba"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.152584 4982 generic.go:334] "Generic (PLEG): container finished" podID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerID="447acfd4c048d55e1f08058b4c91272ee9f4e8985a1b8beb44df17770342a816" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.152666 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c478fb9-56vcd" 
event={"ID":"5448f946-ae45-4ef2-84fb-d848a335c81e","Type":"ContainerDied","Data":"447acfd4c048d55e1f08058b4c91272ee9f4e8985a1b8beb44df17770342a816"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.161787 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95b0abeb-729f-49a9-8d9f-d931ec9f0814" (UID: "95b0abeb-729f-49a9-8d9f-d931ec9f0814"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.170317 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jvgnd_95b0abeb-729f-49a9-8d9f-d931ec9f0814/openstack-network-exporter/0.log" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.170419 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jvgnd" event={"ID":"95b0abeb-729f-49a9-8d9f-d931ec9f0814","Type":"ContainerDied","Data":"c6b6c69eac048ad07ae465b119d56644b1b635f2eb06935f69539c1f83e0c20c"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.170457 4982 scope.go:117] "RemoveContainer" containerID="6d3c5732d58d4047ed886c95a750296b113ce1ccf59d5efffc3d6754ac5aa0bb" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.170580 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jvgnd" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.183044 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75glv\" (UniqueName: \"kubernetes.io/projected/95b0abeb-729f-49a9-8d9f-d931ec9f0814-kube-api-access-75glv\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.183074 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.197108 4982 generic.go:334] "Generic (PLEG): container finished" podID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerID="3a4d51e189b74fd0d7cce186ffa8581a0e4f9c7c3df73a5e73547594eddaa7d7" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.197189 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d1bed17-dcb3-462e-8227-272c4c1fbc16","Type":"ContainerDied","Data":"3a4d51e189b74fd0d7cce186ffa8581a0e4f9c7c3df73a5e73547594eddaa7d7"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.268061 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9/ovsdbserver-nb/0.log" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.268109 4982 generic.go:334] "Generic (PLEG): container finished" podID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerID="2fd889b537634e373cd79c15112db577cb478310dae8eb3f23acd8930c050163" exitCode=2 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.268131 4982 generic.go:334] "Generic (PLEG): container finished" podID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerID="5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.268228 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9","Type":"ContainerDied","Data":"2fd889b537634e373cd79c15112db577cb478310dae8eb3f23acd8930c050163"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.268264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9","Type":"ContainerDied","Data":"5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.275667 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "95b0abeb-729f-49a9-8d9f-d931ec9f0814" (UID: "95b0abeb-729f-49a9-8d9f-d931ec9f0814"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.286966 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/95b0abeb-729f-49a9-8d9f-d931ec9f0814-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.287438 4982 generic.go:334] "Generic (PLEG): container finished" podID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerID="2ae97d13aafab9b78c55d20b89d369229f390d7aed5aea78a7522b15b9f48310" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.287541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a96cf7-6cad-496c-9aa6-523fce7af3ff","Type":"ContainerDied","Data":"2ae97d13aafab9b78c55d20b89d369229f390d7aed5aea78a7522b15b9f48310"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.292313 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.295407 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_07620928-c400-47dd-b76d-25f31c3c1e0e/ovsdbserver-sb/0.log" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.295460 4982 generic.go:334] "Generic (PLEG): container finished" podID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerID="3cd9a32a141012cf4bba8863a14bb8c7f3122d23d541a734493f06ad3ca70f53" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.295534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"07620928-c400-47dd-b76d-25f31c3c1e0e","Type":"ContainerDied","Data":"3cd9a32a141012cf4bba8863a14bb8c7f3122d23d541a734493f06ad3ca70f53"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.296300 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.303743 4982 generic.go:334] "Generic (PLEG): container finished" podID="ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" containerID="5681b28d1051a5f8acd2929e33682deb65b19b740cd2b6c2b2d8317baaa49110" exitCode=137 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.316329 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qdkmz" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.325966 4982 generic.go:334] "Generic (PLEG): container finished" podID="497bf205-5985-411f-960e-2364563ae9e1" containerID="98cf2cebef5f1b732d6a19c80cc61916d6f7fb654b98a4e5043b8c16409a7e5f" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.326071 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"497bf205-5985-411f-960e-2364563ae9e1","Type":"ContainerDied","Data":"98cf2cebef5f1b732d6a19c80cc61916d6f7fb654b98a4e5043b8c16409a7e5f"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.338984 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerID="40ca415939a7e1e94f0012bb346eed19b90d1b52ec23f37e175fe94046754239" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.339087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb","Type":"ContainerDied","Data":"40ca415939a7e1e94f0012bb346eed19b90d1b52ec23f37e175fe94046754239"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.353065 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsxbv" event={"ID":"4b09c921-bbfa-4934-9c8b-ffdd0f294481","Type":"ContainerStarted","Data":"58415422f504b1a03990485946a674ebdaa135d95c0a87217f037ff87e87d878"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.393750 4982 generic.go:334] "Generic (PLEG): container finished" podID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerID="39f9cc3dea990a44f6526e8d44212606fd2bd98975af302178a764d6956a1b61" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.394160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" event={"ID":"a358b847-122d-4a65-99b9-e21d6a8136aa","Type":"ContainerDied","Data":"39f9cc3dea990a44f6526e8d44212606fd2bd98975af302178a764d6956a1b61"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.401783 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-utilities\") pod \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.401921 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-log-ovn\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.401964 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-catalog-content\") pod \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402041 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402072 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xjcf4\" (UniqueName: \"kubernetes.io/projected/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-kube-api-access-xjcf4\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402175 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-scripts\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402194 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsplb\" (UniqueName: \"kubernetes.io/projected/16911749-9c58-48fb-8f10-e60d8c4dc8f2-kube-api-access-dsplb\") pod \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\" (UID: \"16911749-9c58-48fb-8f10-e60d8c4dc8f2\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402239 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-ovn-controller-tls-certs\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402294 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run-ovn\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.402364 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-combined-ca-bundle\") pod \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\" (UID: \"10c1d9ff-78da-4916-b01b-e4eb2935c4c3\") " Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.407923 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.409694 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run" (OuterVolumeSpecName: "var-run") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.410159 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.410541 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-utilities" (OuterVolumeSpecName: "utilities") pod "16911749-9c58-48fb-8f10-e60d8c4dc8f2" (UID: "16911749-9c58-48fb-8f10-e60d8c4dc8f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.410912 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-scripts" (OuterVolumeSpecName: "scripts") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.424851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16911749-9c58-48fb-8f10-e60d8c4dc8f2-kube-api-access-dsplb" (OuterVolumeSpecName: "kube-api-access-dsplb") pod "16911749-9c58-48fb-8f10-e60d8c4dc8f2" (UID: "16911749-9c58-48fb-8f10-e60d8c4dc8f2"). InnerVolumeSpecName "kube-api-access-dsplb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.425098 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-kube-api-access-xjcf4" (OuterVolumeSpecName: "kube-api-access-xjcf4") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "kube-api-access-xjcf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.432974 4982 generic.go:334] "Generic (PLEG): container finished" podID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerID="f7d962ecdf637e775a21dfd3f37e7d877a96731f2de9d97ae4e8bd5571888259" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.433079 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10c2a3b1-f4b9-4388-9261-fa703cfaf757","Type":"ContainerDied","Data":"f7d962ecdf637e775a21dfd3f37e7d877a96731f2de9d97ae4e8bd5571888259"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468114 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468164 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468172 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468178 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468185 4982 generic.go:334] "Generic (PLEG): container finished" 
podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468194 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468220 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468227 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468234 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468241 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468248 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468255 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468263 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468330 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468384 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468398 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468407 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468416 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468423 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468433 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468468 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468479 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468487 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468496 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.468503 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.470940 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16911749-9c58-48fb-8f10-e60d8c4dc8f2" (UID: "16911749-9c58-48fb-8f10-e60d8c4dc8f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.475374 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.478488 4982 generic.go:334] "Generic (PLEG): container finished" podID="be86b806-f778-4826-8cfa-c29d366c9af5" containerID="74a7b42a19564dc748ab37edddb7525e4b1a173e9cc8b1eb1fb22abc814887a5" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.478565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" event={"ID":"be86b806-f778-4826-8cfa-c29d366c9af5","Type":"ContainerDied","Data":"74a7b42a19564dc748ab37edddb7525e4b1a173e9cc8b1eb1fb22abc814887a5"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.498932 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" exitCode=0 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.499054 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerDied","Data":"52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506100 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506130 4982 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506140 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16911749-9c58-48fb-8f10-e60d8c4dc8f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506150 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506158 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjcf4\" (UniqueName: \"kubernetes.io/projected/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-kube-api-access-xjcf4\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506166 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506173 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsplb\" (UniqueName: \"kubernetes.io/projected/16911749-9c58-48fb-8f10-e60d8c4dc8f2-kube-api-access-dsplb\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506181 4982 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.506189 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.506299 4982 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.506312 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.506323 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-696995d9b9-k2q7r: [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.506375 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift podName:740c5af1-ac3c-431a-ab6e-c5aecf933fea nodeName:}" failed. No retries permitted until 2026-01-23 08:20:49.506359651 +0000 UTC m=+1523.986723037 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift") pod "swift-proxy-696995d9b9-k2q7r" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea") : [secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.540510 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-jvgnd"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.546997 4982 generic.go:334] "Generic (PLEG): container finished" podID="f2b7b481-7320-4d03-a230-a173c66dae36" containerID="44123210e26061b640cdcc4b3b9dc1005507bd1badfae509c93d9f329c1f4807" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.547313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7965657874-h2sgm" event={"ID":"f2b7b481-7320-4d03-a230-a173c66dae36","Type":"ContainerDied","Data":"44123210e26061b640cdcc4b3b9dc1005507bd1badfae509c93d9f329c1f4807"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.562344 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-jvgnd"] Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.582688 4982 generic.go:334] "Generic (PLEG): container finished" podID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerID="db343ad642c9cc837f707a4e56a8a8c708cc25fec21d7f114d380fb4ad3f1acf" exitCode=143 Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.582740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6959c78bf5-trn2r" event={"ID":"c49f67d9-be69-448a-9f50-cf2801ed1528","Type":"ContainerDied","Data":"db343ad642c9cc837f707a4e56a8a8c708cc25fec21d7f114d380fb4ad3f1acf"} Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.706602 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "10c1d9ff-78da-4916-b01b-e4eb2935c4c3" (UID: "10c1d9ff-78da-4916-b01b-e4eb2935c4c3"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:48 crc kubenswrapper[4982]: I0123 08:20:48.754219 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c1d9ff-78da-4916-b01b-e4eb2935c4c3-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.754598 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.755323 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data podName:26d23dde-8aac-481b-bfdb-ce3315166bc4 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:52.755303879 +0000 UTC m=+1527.235667265 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data") pod "rabbitmq-server-0" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4") : configmap "rabbitmq-config-data" not found Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.900858 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515 is running failed: container process not found" containerID="5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.907977 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515 is running failed: container process not found" containerID="5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.912029 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515 is running failed: container process not found" containerID="5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 23 08:20:48 crc kubenswrapper[4982]: E0123 08:20:48.912087 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="ovsdbserver-nb" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.006859 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_07620928-c400-47dd-b76d-25f31c3c1e0e/ovsdbserver-sb/0.log" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.007324 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.016852 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.029283 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.057243 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9/ovsdbserver-nb/0.log" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.057369 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.066961 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.131195 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.137581 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.141873 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.141958 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="ovn-northd" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.166535 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-kube-api-access-dxzb5\") pod \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.166597 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-metrics-certs-tls-certs\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.166669 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-scripts\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.166698 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdb-rundir\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.166745 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdb-rundir\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.166776 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-swift-storage-0\") pod \"a358b847-122d-4a65-99b9-e21d6a8136aa\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.167750 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2ng\" (UniqueName: \"kubernetes.io/projected/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-kube-api-access-vt2ng\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.167806 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.167830 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-combined-ca-bundle\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.167874 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-svc\") pod \"a358b847-122d-4a65-99b9-e21d6a8136aa\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.167966 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-metrics-certs-tls-certs\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168014 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5j9m\" (UniqueName: \"kubernetes.io/projected/07620928-c400-47dd-b76d-25f31c3c1e0e-kube-api-access-k5j9m\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168048 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-nb\") pod \"a358b847-122d-4a65-99b9-e21d6a8136aa\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168069 4982 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config-secret\") pod \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168092 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99xln\" (UniqueName: \"kubernetes.io/projected/4b09c921-bbfa-4934-9c8b-ffdd0f294481-kube-api-access-99xln\") pod \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168112 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-combined-ca-bundle\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168146 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgtqz\" (UniqueName: \"kubernetes.io/projected/a358b847-122d-4a65-99b9-e21d6a8136aa-kube-api-access-vgtqz\") pod \"a358b847-122d-4a65-99b9-e21d6a8136aa\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168175 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-config\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168213 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config\") pod \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168237 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-combined-ca-bundle\") pod \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\" (UID: \"ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168262 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-config\") pod \"a358b847-122d-4a65-99b9-e21d6a8136aa\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b09c921-bbfa-4934-9c8b-ffdd0f294481-operator-scripts\") pod \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\" (UID: \"4b09c921-bbfa-4934-9c8b-ffdd0f294481\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168312 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-config\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168344 4982 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdbserver-nb-tls-certs\") pod \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\" (UID: \"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168376 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdbserver-sb-tls-certs\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168407 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-sb\") pod \"a358b847-122d-4a65-99b9-e21d6a8136aa\" (UID: \"a358b847-122d-4a65-99b9-e21d6a8136aa\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168435 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-scripts\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168462 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"07620928-c400-47dd-b76d-25f31c3c1e0e\" (UID: \"07620928-c400-47dd-b76d-25f31c3c1e0e\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.168729 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-scripts" (OuterVolumeSpecName: "scripts") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.169356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b09c921-bbfa-4934-9c8b-ffdd0f294481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b09c921-bbfa-4934-9c8b-ffdd0f294481" (UID: "4b09c921-bbfa-4934-9c8b-ffdd0f294481"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.170000 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-config" (OuterVolumeSpecName: "config") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.170879 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-config" (OuterVolumeSpecName: "config") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171100 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.171172 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.171226 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data podName:0b4b8957-4a36-4d8b-a591-fdcbe696c808 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:53.171207085 +0000 UTC m=+1527.651570471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data") pod "rabbitmq-cell1-server-0" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808") : configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171398 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171417 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b09c921-bbfa-4934-9c8b-ffdd0f294481-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171431 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171443 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171455 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.171787 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.180289 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-scripts" (OuterVolumeSpecName: "scripts") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.205244 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a358b847-122d-4a65-99b9-e21d6a8136aa-kube-api-access-vgtqz" (OuterVolumeSpecName: "kube-api-access-vgtqz") pod "a358b847-122d-4a65-99b9-e21d6a8136aa" (UID: "a358b847-122d-4a65-99b9-e21d6a8136aa"). InnerVolumeSpecName "kube-api-access-vgtqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.215989 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b09c921-bbfa-4934-9c8b-ffdd0f294481-kube-api-access-99xln" (OuterVolumeSpecName: "kube-api-access-99xln") pod "4b09c921-bbfa-4934-9c8b-ffdd0f294481" (UID: "4b09c921-bbfa-4934-9c8b-ffdd0f294481"). InnerVolumeSpecName "kube-api-access-99xln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.216194 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.231539 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-kube-api-access-vt2ng" (OuterVolumeSpecName: "kube-api-access-vt2ng") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "kube-api-access-vt2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.265289 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273211 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgtqz\" (UniqueName: \"kubernetes.io/projected/a358b847-122d-4a65-99b9-e21d6a8136aa-kube-api-access-vgtqz\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273241 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07620928-c400-47dd-b76d-25f31c3c1e0e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273269 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273278 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273287 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt2ng\" (UniqueName: \"kubernetes.io/projected/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-kube-api-access-vt2ng\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273299 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.273308 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99xln\" (UniqueName: \"kubernetes.io/projected/4b09c921-bbfa-4934-9c8b-ffdd0f294481-kube-api-access-99xln\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.276935 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07620928-c400-47dd-b76d-25f31c3c1e0e-kube-api-access-k5j9m" (OuterVolumeSpecName: "kube-api-access-k5j9m") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "kube-api-access-k5j9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.279352 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" (UID: "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.288907 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-kube-api-access-dxzb5" (OuterVolumeSpecName: "kube-api-access-dxzb5") pod "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" (UID: "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0"). InnerVolumeSpecName "kube-api-access-dxzb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.290498 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-696995d9b9-k2q7r"] Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.290741 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-696995d9b9-k2q7r" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-httpd" containerID="cri-o://f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f" gracePeriod=30 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.290802 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-696995d9b9-k2q7r" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-server" containerID="cri-o://2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54" gracePeriod=30 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.382756 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5j9m\" (UniqueName: \"kubernetes.io/projected/07620928-c400-47dd-b76d-25f31c3c1e0e-kube-api-access-k5j9m\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.382805 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.382816 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxzb5\" (UniqueName: \"kubernetes.io/projected/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-kube-api-access-dxzb5\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.454250 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.493802 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.494610 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.592750 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a358b847-122d-4a65-99b9-e21d6a8136aa" (UID: "a358b847-122d-4a65-99b9-e21d6a8136aa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.595722 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-nova-novncproxy-tls-certs\") pod \"996235df-739a-4029-96df-88e45a44a38c\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.595829 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-combined-ca-bundle\") pod \"996235df-739a-4029-96df-88e45a44a38c\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.595884 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-vencrypt-tls-certs\") pod \"996235df-739a-4029-96df-88e45a44a38c\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.595985 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgx8m\" (UniqueName: \"kubernetes.io/projected/996235df-739a-4029-96df-88e45a44a38c-kube-api-access-cgx8m\") pod \"996235df-739a-4029-96df-88e45a44a38c\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.596057 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-config-data\") pod \"996235df-739a-4029-96df-88e45a44a38c\" (UID: \"996235df-739a-4029-96df-88e45a44a38c\") " Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.596516 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.596611 4982 projected.go:263] Couldn't get secret openstack/swift-proxy-config-data: secret "swift-proxy-config-data" not found Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.596642 4982 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.596652 4982 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.596663 4982 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-696995d9b9-k2q7r: [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.596708 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift podName:740c5af1-ac3c-431a-ab6e-c5aecf933fea nodeName:}" failed. No retries permitted until 2026-01-23 08:20:51.596693517 +0000 UTC m=+1526.077056903 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift") pod "swift-proxy-696995d9b9-k2q7r" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea") : [secret "swift-proxy-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.635894 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996235df-739a-4029-96df-88e45a44a38c-kube-api-access-cgx8m" (OuterVolumeSpecName: "kube-api-access-cgx8m") pod "996235df-739a-4029-96df-88e45a44a38c" (UID: "996235df-739a-4029-96df-88e45a44a38c"). InnerVolumeSpecName "kube-api-access-cgx8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.645768 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336" exitCode=0 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.645845 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.652491 4982 generic.go:334] "Generic (PLEG): container finished" podID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerID="2fdf936f82b58284aa5deb1d6981c7ab39aeaacc42c4318726f6a5011a3f326d" exitCode=0 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.652573 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d81a8b1-9b52-44df-bc0b-7852d3b8570c","Type":"ContainerDied","Data":"2fdf936f82b58284aa5deb1d6981c7ab39aeaacc42c4318726f6a5011a3f326d"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.655529 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.657659 4982 generic.go:334] "Generic (PLEG): container finished" podID="be869517-a530-472f-b11e-59558aafc066" containerID="881b906ef1be3ae613692e976ccc55331837c84c9820bb020dc06fe11f437493" exitCode=0 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.657735 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"be869517-a530-472f-b11e-59558aafc066","Type":"ContainerDied","Data":"881b906ef1be3ae613692e976ccc55331837c84c9820bb020dc06fe11f437493"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.657764 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"be869517-a530-472f-b11e-59558aafc066","Type":"ContainerDied","Data":"5498c06a56d02bf6b6b042242193dffd02d56d85dd54376723354b7be2bbd132"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.657776 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5498c06a56d02bf6b6b042242193dffd02d56d85dd54376723354b7be2bbd132" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.661315 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" 
(UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.661434 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9/ovsdbserver-nb/0.log" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.661541 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" (UID: "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.661586 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.661583 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9","Type":"ContainerDied","Data":"b45b6caf4606e9dc21976de349eaea574e1ab02f3b6d07e733659b2b61b2a564"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.661674 4982 scope.go:117] "RemoveContainer" containerID="2fd889b537634e373cd79c15112db577cb478310dae8eb3f23acd8930c050163" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.670584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xf498" event={"ID":"16911749-9c58-48fb-8f10-e60d8c4dc8f2","Type":"ContainerDied","Data":"8b10ee86255dc1a41fa32b13f781079158a2c81d6fc7cbb9938120aae20b3752"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.671456 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xf498" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.677713 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qdkmz" event={"ID":"10c1d9ff-78da-4916-b01b-e4eb2935c4c3","Type":"ContainerDied","Data":"5b6443d25d7b17ed816c3bd2b5a031639f52caf117cd1730e462a73088a7ecaa"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.678103 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qdkmz" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.678323 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.698611 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.698663 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.698677 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.698688 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.698701 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgx8m\" (UniqueName: \"kubernetes.io/projected/996235df-739a-4029-96df-88e45a44a38c-kube-api-access-cgx8m\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.698978 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" event={"ID":"a358b847-122d-4a65-99b9-e21d6a8136aa","Type":"ContainerDied","Data":"ae74c82bfe1f5971bbb740ea6c3b7cdbd538bafd66f324db244e06a5fc7cfbbc"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.699073 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-7xfzb" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.700941 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" (UID: "ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.704519 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-config-data" (OuterVolumeSpecName: "config-data") pod "996235df-739a-4029-96df-88e45a44a38c" (UID: "996235df-739a-4029-96df-88e45a44a38c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.709183 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_07620928-c400-47dd-b76d-25f31c3c1e0e/ovsdbserver-sb/0.log" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.709455 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"07620928-c400-47dd-b76d-25f31c3c1e0e","Type":"ContainerDied","Data":"a2f6674c26428112bac1ecc2c1cb34b7d666361ef915c832783348815ec10f52"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.709760 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.725234 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fsxbv" event={"ID":"4b09c921-bbfa-4934-9c8b-ffdd0f294481","Type":"ContainerDied","Data":"58415422f504b1a03990485946a674ebdaa135d95c0a87217f037ff87e87d878"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.725353 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fsxbv" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.727471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a358b847-122d-4a65-99b9-e21d6a8136aa" (UID: "a358b847-122d-4a65-99b9-e21d6a8136aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.727515 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a358b847-122d-4a65-99b9-e21d6a8136aa" (UID: "a358b847-122d-4a65-99b9-e21d6a8136aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.742464 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.742739 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-config" (OuterVolumeSpecName: "config") pod "a358b847-122d-4a65-99b9-e21d6a8136aa" (UID: "a358b847-122d-4a65-99b9-e21d6a8136aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.754163 4982 generic.go:334] "Generic (PLEG): container finished" podID="996235df-739a-4029-96df-88e45a44a38c" containerID="0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5" exitCode=0 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.754231 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"996235df-739a-4029-96df-88e45a44a38c","Type":"ContainerDied","Data":"0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.754260 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"996235df-739a-4029-96df-88e45a44a38c","Type":"ContainerDied","Data":"0bca2e40080cbc834225bad3b25f0376de390c68077a740300879fdde557bb4a"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.754315 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.771402 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.784416 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "996235df-739a-4029-96df-88e45a44a38c" (UID: "996235df-739a-4029-96df-88e45a44a38c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.786580 4982 generic.go:334] "Generic (PLEG): container finished" podID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerID="f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f" exitCode=0 Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.786648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-696995d9b9-k2q7r" event={"ID":"740c5af1-ac3c-431a-ab6e-c5aecf933fea","Type":"ContainerDied","Data":"f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f"} Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.796976 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "996235df-739a-4029-96df-88e45a44a38c" (UID: "996235df-739a-4029-96df-88e45a44a38c"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: E0123 08:20:49.798022 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod740c5af1_ac3c_431a_ab6e_c5aecf933fea.slice/crio-conmon-f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f.scope\": RecentStats: unable to find data in memory cache]" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803564 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803594 4982 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803604 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803613 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803622 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803644 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803652 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.803661 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.830959 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a358b847-122d-4a65-99b9-e21d6a8136aa" (UID: "a358b847-122d-4a65-99b9-e21d6a8136aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.840223 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "07620928-c400-47dd-b76d-25f31c3c1e0e" (UID: "07620928-c400-47dd-b76d-25f31c3c1e0e"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.849468 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b0abeb-729f-49a9-8d9f-d931ec9f0814" path="/var/lib/kubelet/pods/95b0abeb-729f-49a9-8d9f-d931ec9f0814/volumes" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.850239 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ab1abc-5f0e-4139-8d32-06470f033cb0" path="/var/lib/kubelet/pods/a5ab1abc-5f0e-4139-8d32-06470f033cb0/volumes" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.850794 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55f5436-8054-4e06-9fc7-a43a1700ebd3" path="/var/lib/kubelet/pods/c55f5436-8054-4e06-9fc7-a43a1700ebd3/volumes" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.856282 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0" path="/var/lib/kubelet/pods/ebe29766-5ba4-4ed3-8dc0-49d9b7337ff0/volumes" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.906982 4982 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a358b847-122d-4a65-99b9-e21d6a8136aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.907039 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/07620928-c400-47dd-b76d-25f31c3c1e0e-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.908578 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.918052 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "996235df-739a-4029-96df-88e45a44a38c" (UID: "996235df-739a-4029-96df-88e45a44a38c"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:49 crc kubenswrapper[4982]: I0123 08:20:49.933412 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" (UID: "65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008372 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5x8nm"] Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008867 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.008888 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b0abeb-729f-49a9-8d9f-d931ec9f0814" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008896 4982 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/996235df-739a-4029-96df-88e45a44a38c-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008903 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b0abeb-729f-49a9-8d9f-d931ec9f0814" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008906 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.008911 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996235df-739a-4029-96df-88e45a44a38c" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008917 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="996235df-739a-4029-96df-88e45a44a38c" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.008934 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="extract-content" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008939 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="extract-content" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.008955 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="extract-utilities" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008960 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="extract-utilities" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.008971 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.008977 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.008995 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009001 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" Jan 23 08:20:50 crc 
kubenswrapper[4982]: E0123 08:20:50.009013 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="init" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009019 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="init" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.009029 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="registry-server" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009034 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="registry-server" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.009041 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009047 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.009069 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="ovsdbserver-nb" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009075 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="ovsdbserver-nb" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.009087 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="dnsmasq-dns" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009093 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="dnsmasq-dns" Jan 23 08:20:50 crc kubenswrapper[4982]: E0123 08:20:50.009103 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="ovsdbserver-sb" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009110 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="ovsdbserver-sb" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009336 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009344 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b0abeb-729f-49a9-8d9f-d931ec9f0814" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009357 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="ovsdbserver-sb" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009370 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" containerName="openstack-network-exporter" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009382 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" containerName="ovn-controller" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009390 4982 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" containerName="dnsmasq-dns" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009400 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" containerName="registry-server" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009412 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" containerName="ovsdbserver-nb" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.009423 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="996235df-739a-4029-96df-88e45a44a38c" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.010021 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5x8nm"] Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.010099 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.012219 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.031487 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.045359 4982 scope.go:117] "RemoveContainer" containerID="5d9d9c5b1e75a886b0ba01f5c270829dc6f2a038baa60217c5b1b09824155515" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.109832 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-galera-tls-certs\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.109879 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-combined-ca-bundle\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.109998 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m4ck\" (UniqueName: \"kubernetes.io/projected/be869517-a530-472f-b11e-59558aafc066-kube-api-access-5m4ck\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110049 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-operator-scripts\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110080 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-config-data-default\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110149 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-kolla-config\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110177 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be869517-a530-472f-b11e-59558aafc066-config-data-generated\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110326 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"be869517-a530-472f-b11e-59558aafc066\" (UID: \"be869517-a530-472f-b11e-59558aafc066\") " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110682 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts\") pod \"root-account-create-update-5x8nm\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") " pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.110916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.111335 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.111420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twhpv\" (UniqueName: \"kubernetes.io/projected/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-kube-api-access-twhpv\") pod \"root-account-create-update-5x8nm\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") " pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.111665 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.111684 4982 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.112017 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.112543 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be869517-a530-472f-b11e-59558aafc066-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.117395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be869517-a530-472f-b11e-59558aafc066-kube-api-access-5m4ck" (OuterVolumeSpecName: "kube-api-access-5m4ck") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "kube-api-access-5m4ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.129792 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.166415 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.206843 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "be869517-a530-472f-b11e-59558aafc066" (UID: "be869517-a530-472f-b11e-59558aafc066"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213058 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts\") pod \"root-account-create-update-5x8nm\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") " pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twhpv\" (UniqueName: \"kubernetes.io/projected/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-kube-api-access-twhpv\") pod \"root-account-create-update-5x8nm\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") " pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213344 4982 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213361 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be869517-a530-472f-b11e-59558aafc066-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213371 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m4ck\" (UniqueName: \"kubernetes.io/projected/be869517-a530-472f-b11e-59558aafc066-kube-api-access-5m4ck\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213383 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be869517-a530-472f-b11e-59558aafc066-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213392 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be869517-a530-472f-b11e-59558aafc066-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.213416 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.216507 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts\") pod \"root-account-create-update-5x8nm\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") " pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.236432 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twhpv\" (UniqueName: \"kubernetes.io/projected/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-kube-api-access-twhpv\") pod \"root-account-create-update-5x8nm\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") " pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.270923 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 23 08:20:50 crc 
kubenswrapper[4982]: I0123 08:20:50.316136 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.367753 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-ddf676c6b-tgrs7" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.180:8778/\": read tcp 10.217.0.2:54604->10.217.0.180:8778: read: connection reset by peer" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.368036 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-ddf676c6b-tgrs7" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.180:8778/\": read tcp 10.217.0.2:54608->10.217.0.180:8778: read: connection reset by peer" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.405876 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": read tcp 10.217.0.2:43200->10.217.0.232:8775: read: connection reset by peer" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.414168 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": read tcp 10.217.0.2:43194->10.217.0.232:8775: read: connection reset by peer" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.716150 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.723191 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-central-agent" containerID="cri-o://60990b5983a5ed6f357cfa2bc9fee6928a439ca6581002d82be773bb5dcd8535" gracePeriod=30 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.724536 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="proxy-httpd" containerID="cri-o://bc2badabfe3be71114926cfc4d259bac2c6e7c4fee0fe0967b92bbf9f88f6e08" gracePeriod=30 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.724604 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="sg-core" containerID="cri-o://bc50b6a7dac2bafbc66b790973c209bbad338f83281ec4c65bb5105ffb7b24d5" gracePeriod=30 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.724699 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-notification-agent" containerID="cri-o://6c840aeda273bccf85f7f5295d0123e2191ab16e2c3ce7f4c6ffeaffac682224" gracePeriod=30 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.844134 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.844411 4982 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/kube-state-metrics-0" podUID="938a15bf-06b9-40ed-89ac-ce3da6052837" containerName="kube-state-metrics" containerID="cri-o://954f7c6825edc3a6a04648b00c0271b4e702de967be419c5506096f1f886a150" gracePeriod=30 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.887012 4982 generic.go:334] "Generic (PLEG): container finished" podID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerID="f85206e6ea38cc3b96a676eb48cd6ab7fe8f14acdcd1691d0be85962d1d2a103" exitCode=0 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.887131 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a96cf7-6cad-496c-9aa6-523fce7af3ff","Type":"ContainerDied","Data":"f85206e6ea38cc3b96a676eb48cd6ab7fe8f14acdcd1691d0be85962d1d2a103"} Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.919400 4982 scope.go:117] "RemoveContainer" containerID="244fc34fcda1a8f51ba5027f68f3edab915eac4fd222494dabe55f97be2d3fd1" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.935656 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf498"] Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.952802 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.976399 4982 generic.go:334] "Generic (PLEG): container finished" podID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerID="3e9ea0f153b90fef5a0f09d7ff49a929afa3ab0bee839a8bf8bff02973243915" exitCode=0 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.976520 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf676c6b-tgrs7" event={"ID":"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b","Type":"ContainerDied","Data":"3e9ea0f153b90fef5a0f09d7ff49a929afa3ab0bee839a8bf8bff02973243915"} Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.979471 4982 generic.go:334] "Generic (PLEG): container finished" podID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerID="675dfaa3136aeef07453de25344bb073c61967ea34a6dc134fc80aa1017e0b49" exitCode=0 Jan 23 08:20:50 crc kubenswrapper[4982]: I0123 08:20:50.979550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10c2a3b1-f4b9-4388-9261-fa703cfaf757","Type":"ContainerDied","Data":"675dfaa3136aeef07453de25344bb073c61967ea34a6dc134fc80aa1017e0b49"} Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.053738 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.053844 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-log-httpd\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.053918 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-internal-tls-certs\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc 
kubenswrapper[4982]: I0123 08:20:51.053990 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-run-httpd\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.054059 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-public-tls-certs\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.054129 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-combined-ca-bundle\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.054150 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-config-data\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.054179 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4zl\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-kube-api-access-7z4zl\") pod \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\" (UID: \"740c5af1-ac3c-431a-ab6e-c5aecf933fea\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.063156 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.064001 4982 generic.go:334] "Generic (PLEG): container finished" podID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerID="2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54" exitCode=0 Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.064108 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.064531 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-696995d9b9-k2q7r" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.064608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-696995d9b9-k2q7r" event={"ID":"740c5af1-ac3c-431a-ab6e-c5aecf933fea","Type":"ContainerDied","Data":"2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54"} Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.064657 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-696995d9b9-k2q7r" event={"ID":"740c5af1-ac3c-431a-ab6e-c5aecf933fea","Type":"ContainerDied","Data":"e54958a4fd80d213360020d01d0a880a1d40f76fc6606c7f4bd348f4c29a1ca6"} Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.081377 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.095306 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.095350 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/740c5af1-ac3c-431a-ab6e-c5aecf933fea-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.103676 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xf498"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.129304 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-kube-api-access-7z4zl" (OuterVolumeSpecName: "kube-api-access-7z4zl") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "kube-api-access-7z4zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.138520 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.172352 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fsxbv"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.198333 4982 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.198369 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4zl\" (UniqueName: \"kubernetes.io/projected/740c5af1-ac3c-431a-ab6e-c5aecf933fea-kube-api-access-7z4zl\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.212850 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fsxbv"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.232554 4982 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-5x8nm" secret="" err="secret \"galera-openstack-dockercfg-9stb6\" not found" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.232602 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5x8nm" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.246091 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qdkmz"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.267876 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7965657874-h2sgm" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.191:9311/healthcheck\": read tcp 10.217.0.2:54300->10.217.0.191:9311: read: connection reset by peer" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.268544 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7965657874-h2sgm" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.191:9311/healthcheck\": read tcp 10.217.0.2:54306->10.217.0.191:9311: read: connection reset by peer" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.278481 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qdkmz"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.300801 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.300874 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts podName:9bdeef6c-2103-4eee-8728-81d9fb1cf3a1 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:51.800858563 +0000 UTC m=+1526.281221949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts") pod "root-account-create-update-5x8nm" (UID: "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1") : configmap "openstack-scripts" not found Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.310675 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.310949 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" containerName="memcached" containerID="cri-o://780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e" gracePeriod=30 Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.318594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.323594 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3155-account-create-update-p7dt8"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.350708 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-config-data" (OuterVolumeSpecName: "config-data") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.352861 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3155-account-create-update-5l62h"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.353480 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be869517-a530-472f-b11e-59558aafc066" containerName="galera" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353498 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be869517-a530-472f-b11e-59558aafc066" containerName="galera" Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.353542 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be869517-a530-472f-b11e-59558aafc066" containerName="mysql-bootstrap" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353551 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be869517-a530-472f-b11e-59558aafc066" containerName="mysql-bootstrap" Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.353571 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-server" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353580 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-server" Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.353599 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-httpd" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353606 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-httpd" 
Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353886 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-httpd" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353913 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be869517-a530-472f-b11e-59558aafc066" containerName="galera" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.353928 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" containerName="proxy-server" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.354833 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.358539 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.403522 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.403558 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.407906 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3155-account-create-update-p7dt8"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.418696 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.421954 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "740c5af1-ac3c-431a-ab6e-c5aecf933fea" (UID: "740c5af1-ac3c-431a-ab6e-c5aecf933fea"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.423868 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4xcgz"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.431922 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3155-account-create-update-5l62h"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.446158 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4xcgz"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.451774 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5j9ml"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.464073 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7cd9887484-nrtvc"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.464483 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7cd9887484-nrtvc" podUID="b969cd53-c58b-4a97-89b8-8581874e1336" containerName="keystone-api" containerID="cri-o://99034c7805c03b0bb5d9d7d40da49977052e28b7160c0383014a800c8a076fbc" gracePeriod=30 Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.471271 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5j9ml"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.489857 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dgx4m"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.495907 4982 scope.go:117] "RemoveContainer" containerID="f3d20b18f910a32517f159b80557a4d4be51a59dc7342da886f6d9a6ed60c6d4" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.496033 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dgx4m"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.506219 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfjj\" (UniqueName: \"kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.506295 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.506389 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.506400 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/740c5af1-ac3c-431a-ab6e-c5aecf933fea-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.517689 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.567139 4982 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/keystone-3155-account-create-update-5l62h"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.567944 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-mtfjj operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-3155-account-create-update-5l62h" podUID="4d43e6d2-7430-4e07-bf0d-fe8b58842799" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.608308 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfjj\" (UniqueName: \"kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.608689 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.608781 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.608842 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts podName:4d43e6d2-7430-4e07-bf0d-fe8b58842799 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:52.108827107 +0000 UTC m=+1526.589190493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts") pod "keystone-3155-account-create-update-5l62h" (UID: "4d43e6d2-7430-4e07-bf0d-fe8b58842799") : configmap "openstack-scripts" not found Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.622210 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.624996 4982 projected.go:194] Error preparing data for projected volume kube-api-access-mtfjj for pod openstack/keystone-3155-account-create-update-5l62h: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.625062 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj podName:4d43e6d2-7430-4e07-bf0d-fe8b58842799 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:52.12504067 +0000 UTC m=+1526.605404056 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtfjj" (UniqueName: "kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj") pod "keystone-3155-account-create-update-5l62h" (UID: "4d43e6d2-7430-4e07-bf0d-fe8b58842799") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.627400 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5x8nm"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.643773 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.656770 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.656845 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" containerName="nova-scheduler-scheduler" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.657163 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.682724 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.744401 4982 scope.go:117] "RemoveContainer" containerID="810ee3ae6127d10449b9bdf1eb435107a0333b102866f4edefef505018126638" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.744816 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.797388 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.815604 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.815831 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts podName:9bdeef6c-2103-4eee-8728-81d9fb1cf3a1 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:52.815813594 +0000 UTC m=+1527.296176980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts") pod "root-account-create-update-5x8nm" (UID: "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1") : configmap "openstack-scripts" not found Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.820803 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.839161 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.193:8776/healthcheck\": dial tcp 10.217.0.193:8776: connect: connection refused" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.839876 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07620928-c400-47dd-b76d-25f31c3c1e0e" path="/var/lib/kubelet/pods/07620928-c400-47dd-b76d-25f31c3c1e0e/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.842216 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.844925 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c1d9ff-78da-4916-b01b-e4eb2935c4c3" path="/var/lib/kubelet/pods/10c1d9ff-78da-4916-b01b-e4eb2935c4c3/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.846063 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16911749-9c58-48fb-8f10-e60d8c4dc8f2" path="/var/lib/kubelet/pods/16911749-9c58-48fb-8f10-e60d8c4dc8f2/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.846930 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19b13975-a675-44f5-a248-e806fe3729fe" path="/var/lib/kubelet/pods/19b13975-a675-44f5-a248-e806fe3729fe/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.848212 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2714e6c8-2065-4423-8249-a5b382970a81" path="/var/lib/kubelet/pods/2714e6c8-2065-4423-8249-a5b382970a81/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.849103 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b09c921-bbfa-4934-9c8b-ffdd0f294481" path="/var/lib/kubelet/pods/4b09c921-bbfa-4934-9c8b-ffdd0f294481/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.849478 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5455d1ce-af24-4eb1-9195-88e721380c49" path="/var/lib/kubelet/pods/5455d1ce-af24-4eb1-9195-88e721380c49/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.850502 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be869517-a530-472f-b11e-59558aafc066" path="/var/lib/kubelet/pods/be869517-a530-472f-b11e-59558aafc066/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.851159 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d53cbcd7-b691-433f-a0fa-fcd319658c1b" path="/var/lib/kubelet/pods/d53cbcd7-b691-433f-a0fa-fcd319658c1b/volumes" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.857087 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.889465 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 08:20:51 
crc kubenswrapper[4982]: I0123 08:20:51.897771 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="4f4741be-bf98-457a-9275-45405ac3f257" containerName="galera" containerID="cri-o://c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2" gracePeriod=30 Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.919171 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-config-data\") pod \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.919252 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-combined-ca-bundle\") pod \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.919302 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-nova-metadata-tls-certs\") pod \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.919383 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc7wf\" (UniqueName: \"kubernetes.io/projected/10c2a3b1-f4b9-4388-9261-fa703cfaf757-kube-api-access-sc7wf\") pod \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.919474 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c2a3b1-f4b9-4388-9261-fa703cfaf757-logs\") pod \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\" (UID: \"10c2a3b1-f4b9-4388-9261-fa703cfaf757\") " Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.922613 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.923461 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.929740 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.929929 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.936294 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.936455 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.936479 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.942129 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10c2a3b1-f4b9-4388-9261-fa703cfaf757-logs" (OuterVolumeSpecName: "logs") pod "10c2a3b1-f4b9-4388-9261-fa703cfaf757" (UID: "10c2a3b1-f4b9-4388-9261-fa703cfaf757"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.961754 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.962665 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c2a3b1-f4b9-4388-9261-fa703cfaf757-kube-api-access-sc7wf" (OuterVolumeSpecName: "kube-api-access-sc7wf") pod "10c2a3b1-f4b9-4388-9261-fa703cfaf757" (UID: "10c2a3b1-f4b9-4388-9261-fa703cfaf757"). InnerVolumeSpecName "kube-api-access-sc7wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.970242 4982 scope.go:117] "RemoveContainer" containerID="fd2fbb6efff76668106e4862758c234b2a497e8985b14592bd533bb9880add3e" Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.972111 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.972266 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.973674 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-7xfzb"] Jan 23 08:20:51 crc kubenswrapper[4982]: I0123 08:20:51.980534 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-config-data" (OuterVolumeSpecName: "config-data") pod "10c2a3b1-f4b9-4388-9261-fa703cfaf757" (UID: "10c2a3b1-f4b9-4388-9261-fa703cfaf757"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.986706 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.988351 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.992699 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 08:20:51 crc kubenswrapper[4982]: E0123 08:20:51.992741 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" containerName="nova-cell1-conductor-conductor" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.009074 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-696995d9b9-k2q7r"] Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.023102 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-696995d9b9-k2q7r"] Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 
08:20:52.039751 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.039779 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc7wf\" (UniqueName: \"kubernetes.io/projected/10c2a3b1-f4b9-4388-9261-fa703cfaf757-kube-api-access-sc7wf\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.039789 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10c2a3b1-f4b9-4388-9261-fa703cfaf757-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.071836 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "10c2a3b1-f4b9-4388-9261-fa703cfaf757" (UID: "10c2a3b1-f4b9-4388-9261-fa703cfaf757"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.071901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10c2a3b1-f4b9-4388-9261-fa703cfaf757" (UID: "10c2a3b1-f4b9-4388-9261-fa703cfaf757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.117188 4982 scope.go:117] "RemoveContainer" containerID="39f9cc3dea990a44f6526e8d44212606fd2bd98975af302178a764d6956a1b61" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.143434 4982 generic.go:334] "Generic (PLEG): container finished" podID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" containerID="0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.143549 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4675771-9ce7-4bd7-a992-4358cbfdc2a0","Type":"ContainerDied","Data":"0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.148474 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfjj\" (UniqueName: \"kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.148583 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.148751 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc 
kubenswrapper[4982]: I0123 08:20:52.148764 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10c2a3b1-f4b9-4388-9261-fa703cfaf757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.148883 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.148951 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts podName:4d43e6d2-7430-4e07-bf0d-fe8b58842799 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:53.14893538 +0000 UTC m=+1527.629298766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts") pod "keystone-3155-account-create-update-5l62h" (UID: "4d43e6d2-7430-4e07-bf0d-fe8b58842799") : configmap "openstack-scripts" not found Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.152381 4982 generic.go:334] "Generic (PLEG): container finished" podID="497bf205-5985-411f-960e-2364563ae9e1" containerID="3cd84b06fc466f0e513e71745eb756df662bac487350768995821b496abc50d0" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.152459 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"497bf205-5985-411f-960e-2364563ae9e1","Type":"ContainerDied","Data":"3cd84b06fc466f0e513e71745eb756df662bac487350768995821b496abc50d0"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.152866 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.153387 4982 projected.go:194] Error preparing data for projected volume kube-api-access-mtfjj for pod openstack/keystone-3155-account-create-update-5l62h: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.153470 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj podName:4d43e6d2-7430-4e07-bf0d-fe8b58842799 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:53.15344728 +0000 UTC m=+1527.633810666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mtfjj" (UniqueName: "kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj") pod "keystone-3155-account-create-update-5l62h" (UID: "4d43e6d2-7430-4e07-bf0d-fe8b58842799") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.155336 4982 generic.go:334] "Generic (PLEG): container finished" podID="f2b7b481-7320-4d03-a230-a173c66dae36" containerID="cf9c6471bb4e01bd9fd153522ac53f92eade7e364f44fd40abf8bdcad7b61af8" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.155410 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7965657874-h2sgm" event={"ID":"f2b7b481-7320-4d03-a230-a173c66dae36","Type":"ContainerDied","Data":"cf9c6471bb4e01bd9fd153522ac53f92eade7e364f44fd40abf8bdcad7b61af8"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.170822 4982 generic.go:334] "Generic (PLEG): container finished" podID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerID="d7c245dddb72e897f40f47d481d743ead593b7d7c2dc1672b3d530d7eb076629" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.170903 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d1bed17-dcb3-462e-8227-272c4c1fbc16","Type":"ContainerDied","Data":"d7c245dddb72e897f40f47d481d743ead593b7d7c2dc1672b3d530d7eb076629"} Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.190109 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.203777 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.206233 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.211733 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.211980 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="271fae4a-bd02-4281-a588-2784769ed552" containerName="nova-cell0-conductor-conductor" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.222407 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.248063 4982 generic.go:334] "Generic (PLEG): container finished" podID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerID="bc50b6a7dac2bafbc66b790973c209bbad338f83281ec4c65bb5105ffb7b24d5" exitCode=2 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.248090 4982 generic.go:334] "Generic (PLEG): container finished" podID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerID="60990b5983a5ed6f357cfa2bc9fee6928a439ca6581002d82be773bb5dcd8535" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.248130 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerDied","Data":"bc50b6a7dac2bafbc66b790973c209bbad338f83281ec4c65bb5105ffb7b24d5"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.248155 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerDied","Data":"60990b5983a5ed6f357cfa2bc9fee6928a439ca6581002d82be773bb5dcd8535"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.248206 4982 scope.go:117] "RemoveContainer" containerID="dd88ed37f3a343261a9c049903d9492c70740351a511140530ad3a09ecab4c79" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251702 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-scripts\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251766 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-internal-tls-certs\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251808 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251866 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-config-data\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251894 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqxtw\" (UniqueName: \"kubernetes.io/projected/497bf205-5985-411f-960e-2364563ae9e1-kube-api-access-lqxtw\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251920 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251949 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-combined-ca-bundle\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.251975 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk7tf\" (UniqueName: \"kubernetes.io/projected/75a96cf7-6cad-496c-9aa6-523fce7af3ff-kube-api-access-qk7tf\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252026 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-logs\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252042 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-config-data\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252064 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-httpd-run\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252125 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-combined-ca-bundle\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252173 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-public-tls-certs\") pod \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\" (UID: \"75a96cf7-6cad-496c-9aa6-523fce7af3ff\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252200 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-scripts\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-logs\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252260 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-httpd-run\") pod \"497bf205-5985-411f-960e-2364563ae9e1\" (UID: \"497bf205-5985-411f-960e-2364563ae9e1\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.252816 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-logs" 
(OuterVolumeSpecName: "logs") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.253001 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.253286 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.258415 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.262141 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-logs" (OuterVolumeSpecName: "logs") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.277804 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"10c2a3b1-f4b9-4388-9261-fa703cfaf757","Type":"ContainerDied","Data":"43b1b00d8e65a1c42d0e08c00e6ffd4ffb634a0e8fde181a9084e037d64d933a"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.277938 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.290903 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerID="0b613fdca688bd3035970b7dc6b487ea56dac750dadeb82a2a1db42250f517a7" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.291044 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb","Type":"ContainerDied","Data":"0b613fdca688bd3035970b7dc6b487ea56dac750dadeb82a2a1db42250f517a7"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.295267 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.297524 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-scripts" (OuterVolumeSpecName: "scripts") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.301227 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.301765 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.309852 4982 generic.go:334] "Generic (PLEG): container finished" podID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerID="d59b5dc7ed6c69bba6c7d620f3e44ff6159fc8950b7d39dcc371af91ae5c4782" exitCode=0 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.309950 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d81a8b1-9b52-44df-bc0b-7852d3b8570c","Type":"ContainerDied","Data":"d59b5dc7ed6c69bba6c7d620f3e44ff6159fc8950b7d39dcc371af91ae5c4782"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.310310 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a96cf7-6cad-496c-9aa6-523fce7af3ff-kube-api-access-qk7tf" (OuterVolumeSpecName: "kube-api-access-qk7tf") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "kube-api-access-qk7tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.319543 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-scripts" (OuterVolumeSpecName: "scripts") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.319952 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497bf205-5985-411f-960e-2364563ae9e1-kube-api-access-lqxtw" (OuterVolumeSpecName: "kube-api-access-lqxtw") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "kube-api-access-lqxtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.320007 4982 generic.go:334] "Generic (PLEG): container finished" podID="938a15bf-06b9-40ed-89ac-ce3da6052837" containerID="954f7c6825edc3a6a04648b00c0271b4e702de967be419c5506096f1f886a150" exitCode=2 Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.320044 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"938a15bf-06b9-40ed-89ac-ce3da6052837","Type":"ContainerDied","Data":"954f7c6825edc3a6a04648b00c0271b4e702de967be419c5506096f1f886a150"} Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.324160 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.343887 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.347188 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.355375 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.355489 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-scripts\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.355549 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmnbw\" (UniqueName: \"kubernetes.io/projected/6d1bed17-dcb3-462e-8227-272c4c1fbc16-kube-api-access-hmnbw\") pod \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.355723 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-internal-tls-certs\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.356611 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-logs\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.356730 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-config-data\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.356783 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7xs4\" (UniqueName: \"kubernetes.io/projected/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-kube-api-access-k7xs4\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.356835 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1bed17-dcb3-462e-8227-272c4c1fbc16-logs\") pod \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.356903 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-internal-tls-certs\") pod \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\" (UID: 
\"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.357275 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-public-tls-certs\") pod \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.357335 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-config-data\") pod \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.357380 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-combined-ca-bundle\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.357406 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-combined-ca-bundle\") pod \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\" (UID: \"6d1bed17-dcb3-462e-8227-272c4c1fbc16\") " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358063 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/75a96cf7-6cad-496c-9aa6-523fce7af3ff-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358083 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358094 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358105 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/497bf205-5985-411f-960e-2364563ae9e1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358116 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358140 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358151 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqxtw\" (UniqueName: \"kubernetes.io/projected/497bf205-5985-411f-960e-2364563ae9e1-kube-api-access-lqxtw\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.358166 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 23 08:20:52 crc 
kubenswrapper[4982]: I0123 08:20:52.358178 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk7tf\" (UniqueName: \"kubernetes.io/projected/75a96cf7-6cad-496c-9aa6-523fce7af3ff-kube-api-access-qk7tf\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.359225 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.362087 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-logs" (OuterVolumeSpecName: "logs") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.365926 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d1bed17-dcb3-462e-8227-272c4c1fbc16-logs" (OuterVolumeSpecName: "logs") pod "6d1bed17-dcb3-462e-8227-272c4c1fbc16" (UID: "6d1bed17-dcb3-462e-8227-272c4c1fbc16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.384413 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-scripts" (OuterVolumeSpecName: "scripts") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.385595 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-kube-api-access-k7xs4" (OuterVolumeSpecName: "kube-api-access-k7xs4") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "kube-api-access-k7xs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.385666 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1bed17-dcb3-462e-8227-272c4c1fbc16-kube-api-access-hmnbw" (OuterVolumeSpecName: "kube-api-access-hmnbw") pod "6d1bed17-dcb3-462e-8227-272c4c1fbc16" (UID: "6d1bed17-dcb3-462e-8227-272c4c1fbc16"). InnerVolumeSpecName "kube-api-access-hmnbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.409298 4982 scope.go:117] "RemoveContainer" containerID="61f6f16773111fc5cd5315ce04a7454bb22686d399b1293d696bebdc325184ec" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.428563 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.438446 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.462101 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463641 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463661 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463671 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmnbw\" (UniqueName: \"kubernetes.io/projected/6d1bed17-dcb3-462e-8227-272c4c1fbc16-kube-api-access-hmnbw\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463683 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463692 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463725 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7xs4\" (UniqueName: \"kubernetes.io/projected/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-kube-api-access-k7xs4\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463733 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.463743 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6d1bed17-dcb3-462e-8227-272c4c1fbc16-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.537150 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.555091 4982 scope.go:117] "RemoveContainer" containerID="3cd9a32a141012cf4bba8863a14bb8c7f3122d23d541a734493f06ad3ca70f53" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.555347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d1bed17-dcb3-462e-8227-272c4c1fbc16" (UID: "6d1bed17-dcb3-462e-8227-272c4c1fbc16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.567638 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.567668 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.615193 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-config-data" (OuterVolumeSpecName: "config-data") pod "6d1bed17-dcb3-462e-8227-272c4c1fbc16" (UID: "6d1bed17-dcb3-462e-8227-272c4c1fbc16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.615269 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6d1bed17-dcb3-462e-8227-272c4c1fbc16" (UID: "6d1bed17-dcb3-462e-8227-272c4c1fbc16"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.619385 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-config-data" (OuterVolumeSpecName: "config-data") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.640518 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-config-data" (OuterVolumeSpecName: "config-data") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.655718 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-config-data" (OuterVolumeSpecName: "config-data") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.670968 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.671009 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.671021 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.671033 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.671047 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.742050 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6d1bed17-dcb3-462e-8227-272c4c1fbc16" (UID: "6d1bed17-dcb3-462e-8227-272c4c1fbc16"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.745971 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75a96cf7-6cad-496c-9aa6-523fce7af3ff" (UID: "75a96cf7-6cad-496c-9aa6-523fce7af3ff"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.769438 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.773035 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "497bf205-5985-411f-960e-2364563ae9e1" (UID: "497bf205-5985-411f-960e-2364563ae9e1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.773143 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.785818 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs\") pod \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\" (UID: \"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b\") " Jan 23 08:20:52 crc kubenswrapper[4982]: W0123 08:20:52.786350 4982 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b/volumes/kubernetes.io~secret/public-tls-certs Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.786719 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.786736 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.795896 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/497bf205-5985-411f-960e-2364563ae9e1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.795976 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data podName:26d23dde-8aac-481b-bfdb-ce3315166bc4 nodeName:}" failed. No retries permitted until 2026-01-23 08:21:00.795918147 +0000 UTC m=+1535.276281563 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data") pod "rabbitmq-server-0" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4") : configmap "rabbitmq-config-data" not found Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.796028 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d1bed17-dcb3-462e-8227-272c4c1fbc16-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.796047 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.796061 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.796075 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75a96cf7-6cad-496c-9aa6-523fce7af3ff-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.857210 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" (UID: "e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.897988 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.898098 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 23 08:20:52 crc kubenswrapper[4982]: E0123 08:20:52.898148 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts podName:9bdeef6c-2103-4eee-8728-81d9fb1cf3a1 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:54.898132836 +0000 UTC m=+1529.378496222 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts") pod "root-account-create-update-5x8nm" (UID: "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1") : configmap "openstack-scripts" not found Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.935795 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.941871 4982 scope.go:117] "RemoveContainer" containerID="5681b28d1051a5f8acd2929e33682deb65b19b740cd2b6c2b2d8317baaa49110" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.951232 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.968579 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.975181 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:20:52 crc kubenswrapper[4982]: I0123 08:20:52.982398 4982 scope.go:117] "RemoveContainer" containerID="0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.003773 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-combined-ca-bundle\") pod \"938a15bf-06b9-40ed-89ac-ce3da6052837\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.004225 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snkv8\" (UniqueName: \"kubernetes.io/projected/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-api-access-snkv8\") pod \"938a15bf-06b9-40ed-89ac-ce3da6052837\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.004343 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-certs\") pod \"938a15bf-06b9-40ed-89ac-ce3da6052837\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.004448 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-config\") pod \"938a15bf-06b9-40ed-89ac-ce3da6052837\" (UID: \"938a15bf-06b9-40ed-89ac-ce3da6052837\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.010337 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.013250 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-api-access-snkv8" (OuterVolumeSpecName: "kube-api-access-snkv8") pod "938a15bf-06b9-40ed-89ac-ce3da6052837" (UID: "938a15bf-06b9-40ed-89ac-ce3da6052837"). InnerVolumeSpecName "kube-api-access-snkv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.018913 4982 scope.go:117] "RemoveContainer" containerID="0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5" Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.022312 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5\": container with ID starting with 0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5 not found: ID does not exist" containerID="0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.022381 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5"} err="failed to get container status \"0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5\": rpc error: code = NotFound desc = could not find container \"0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5\": container with ID starting with 0640eb8b0bb00937521827207e44e21efe22b2c8b02742ccb34ecf322164dcb5 not found: ID does not exist" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.022408 4982 scope.go:117] "RemoveContainer" containerID="2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.027953 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.056324 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "938a15bf-06b9-40ed-89ac-ce3da6052837" (UID: "938a15bf-06b9-40ed-89ac-ce3da6052837"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.065720 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "938a15bf-06b9-40ed-89ac-ce3da6052837" (UID: "938a15bf-06b9-40ed-89ac-ce3da6052837"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.069606 4982 scope.go:117] "RemoveContainer" containerID="f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.081377 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "938a15bf-06b9-40ed-89ac-ce3da6052837" (UID: "938a15bf-06b9-40ed-89ac-ce3da6052837"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.091419 4982 scope.go:117] "RemoveContainer" containerID="2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54" Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.091962 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54\": container with ID starting with 2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54 not found: ID does not exist" containerID="2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.092003 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54"} err="failed to get container status \"2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54\": rpc error: code = NotFound desc = could not find container \"2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54\": container with ID starting with 2b4ea6a5811ce14a0024a2fbd5705dd76ed831fea2dfb1f706ecd7e8dd1d6e54 not found: ID does not exist" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.092055 4982 scope.go:117] "RemoveContainer" containerID="f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f" Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.092476 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f\": container with ID starting with f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f not found: ID does not exist" containerID="f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.092503 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f"} err="failed to get container status \"f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f\": rpc error: code = NotFound desc = could not find container \"f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f\": container with ID starting with f44f87c6f96be8308cd453187d98e57e7c799316b1c0d24172753ffcfc49218f not found: ID does not exist" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.092522 4982 scope.go:117] "RemoveContainer" containerID="675dfaa3136aeef07453de25344bb073c61967ea34a6dc134fc80aa1017e0b49" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114101 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-internal-tls-certs\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kolla-config\") pod \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114195 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-combined-ca-bundle\") pod \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114219 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-etc-machine-id\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114257 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data-custom\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114296 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-combined-ca-bundle\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114331 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-config-data\") pod \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114354 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data\") pod \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114382 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data-custom\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114409 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-scripts\") pod \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114436 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-internal-tls-certs\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114485 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b7b481-7320-4d03-a230-a173c66dae36-logs\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114506 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-public-tls-certs\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114547 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-combined-ca-bundle\") pod \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114576 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-config-data\") pod \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114596 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-memcached-tls-certs\") pod \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114615 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114660 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-scripts\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114715 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zllhv\" (UniqueName: \"kubernetes.io/projected/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-kube-api-access-zllhv\") pod \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114739 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js4vs\" (UniqueName: \"kubernetes.io/projected/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-kube-api-access-js4vs\") pod \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\" (UID: \"d4675771-9ce7-4bd7-a992-4358cbfdc2a0\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114787 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-public-tls-certs\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114806 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114830 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx7sl\" (UniqueName: 
\"kubernetes.io/projected/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kube-api-access-zx7sl\") pod \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114848 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data-custom\") pod \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114868 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-combined-ca-bundle\") pod \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\" (UID: \"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114892 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-combined-ca-bundle\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114912 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk96h\" (UniqueName: \"kubernetes.io/projected/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-kube-api-access-jk96h\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114946 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-etc-machine-id\") pod \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\" (UID: \"8d81a8b1-9b52-44df-bc0b-7852d3b8570c\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114966 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzdzf\" (UniqueName: \"kubernetes.io/projected/f2b7b481-7320-4d03-a230-a173c66dae36-kube-api-access-kzdzf\") pod \"f2b7b481-7320-4d03-a230-a173c66dae36\" (UID: \"f2b7b481-7320-4d03-a230-a173c66dae36\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.114990 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-logs\") pod \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\" (UID: \"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.115549 4982 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.115566 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.115578 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snkv8\" (UniqueName: \"kubernetes.io/projected/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-api-access-snkv8\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc 
kubenswrapper[4982]: I0123 08:20:53.115591 4982 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/938a15bf-06b9-40ed-89ac-ce3da6052837-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.115933 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.116669 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-config-data" (OuterVolumeSpecName: "config-data") pod "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" (UID: "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.116887 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" (UID: "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.123213 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b7b481-7320-4d03-a230-a173c66dae36-logs" (OuterVolumeSpecName: "logs") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.123535 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-logs" (OuterVolumeSpecName: "logs") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.133740 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d81a8b1-9b52-44df-bc0b-7852d3b8570c" (UID: "8d81a8b1-9b52-44df-bc0b-7852d3b8570c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.155383 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.155687 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-kube-api-access-zllhv" (OuterVolumeSpecName: "kube-api-access-zllhv") pod "8d81a8b1-9b52-44df-bc0b-7852d3b8570c" (UID: "8d81a8b1-9b52-44df-bc0b-7852d3b8570c"). InnerVolumeSpecName "kube-api-access-zllhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.157380 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-kube-api-access-js4vs" (OuterVolumeSpecName: "kube-api-access-js4vs") pod "d4675771-9ce7-4bd7-a992-4358cbfdc2a0" (UID: "d4675771-9ce7-4bd7-a992-4358cbfdc2a0"). InnerVolumeSpecName "kube-api-access-js4vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.163904 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d81a8b1-9b52-44df-bc0b-7852d3b8570c" (UID: "8d81a8b1-9b52-44df-bc0b-7852d3b8570c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.164004 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-scripts" (OuterVolumeSpecName: "scripts") pod "8d81a8b1-9b52-44df-bc0b-7852d3b8570c" (UID: "8d81a8b1-9b52-44df-bc0b-7852d3b8570c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.164109 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.164200 4982 scope.go:117] "RemoveContainer" containerID="f7d962ecdf637e775a21dfd3f37e7d877a96731f2de9d97ae4e8bd5571888259" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.166922 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-scripts" (OuterVolumeSpecName: "scripts") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.174141 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kube-api-access-zx7sl" (OuterVolumeSpecName: "kube-api-access-zx7sl") pod "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" (UID: "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16"). InnerVolumeSpecName "kube-api-access-zx7sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.175205 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-kube-api-access-jk96h" (OuterVolumeSpecName: "kube-api-access-jk96h") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "kube-api-access-jk96h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.175327 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b7b481-7320-4d03-a230-a173c66dae36-kube-api-access-kzdzf" (OuterVolumeSpecName: "kube-api-access-kzdzf") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "kube-api-access-kzdzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.217935 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.218255 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfjj\" (UniqueName: \"kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj\") pod \"keystone-3155-account-create-update-5l62h\" (UID: \"4d43e6d2-7430-4e07-bf0d-fe8b58842799\") " pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.218282 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.218359 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts podName:4d43e6d2-7430-4e07-bf0d-fe8b58842799 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:55.218340527 +0000 UTC m=+1529.698703913 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts") pod "keystone-3155-account-create-update-5l62h" (UID: "4d43e6d2-7430-4e07-bf0d-fe8b58842799") : configmap "openstack-scripts" not found Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.218427 4982 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.218466 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data podName:0b4b8957-4a36-4d8b-a591-fdcbe696c808 nodeName:}" failed. No retries permitted until 2026-01-23 08:21:01.21845187 +0000 UTC m=+1535.698815256 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data") pod "rabbitmq-cell1-server-0" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808") : configmap "rabbitmq-cell1-config-data" not found Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.227968 4982 projected.go:194] Error preparing data for projected volume kube-api-access-mtfjj for pod openstack/keystone-3155-account-create-update-5l62h: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.228028 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj podName:4d43e6d2-7430-4e07-bf0d-fe8b58842799 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:55.228012885 +0000 UTC m=+1529.708376271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mtfjj" (UniqueName: "kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj") pod "keystone-3155-account-create-update-5l62h" (UID: "4d43e6d2-7430-4e07-bf0d-fe8b58842799") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228297 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228348 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zllhv\" (UniqueName: \"kubernetes.io/projected/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-kube-api-access-zllhv\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228364 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js4vs\" (UniqueName: \"kubernetes.io/projected/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-kube-api-access-js4vs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228378 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx7sl\" (UniqueName: \"kubernetes.io/projected/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kube-api-access-zx7sl\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228395 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228409 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk96h\" (UniqueName: \"kubernetes.io/projected/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-kube-api-access-jk96h\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228421 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzdzf\" (UniqueName: \"kubernetes.io/projected/f2b7b481-7320-4d03-a230-a173c66dae36-kube-api-access-kzdzf\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228438 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228473 4982 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228487 4982 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228501 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228514 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228524 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228535 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228546 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b7b481-7320-4d03-a230-a173c66dae36-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.228560 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.242121 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-config-data" (OuterVolumeSpecName: "config-data") pod "d4675771-9ce7-4bd7-a992-4358cbfdc2a0" (UID: "d4675771-9ce7-4bd7-a992-4358cbfdc2a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.247843 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.274933 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.300597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" (UID: "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.313664 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4675771-9ce7-4bd7-a992-4358cbfdc2a0" (UID: "d4675771-9ce7-4bd7-a992-4358cbfdc2a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.339508 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.339558 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.339573 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.339585 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4675771-9ce7-4bd7-a992-4358cbfdc2a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.341252 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.348134 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5x8nm"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.350436 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.350808 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.353274 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.354136 4982 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 23 08:20:53 crc kubenswrapper[4982]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 23 08:20:53 crc kubenswrapper[4982]: Jan 23 08:20:53 crc kubenswrapper[4982]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 23 08:20:53 crc kubenswrapper[4982]: Jan 23 08:20:53 crc kubenswrapper[4982]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 23 08:20:53 crc kubenswrapper[4982]: Jan 23 08:20:53 crc kubenswrapper[4982]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 23 08:20:53 crc kubenswrapper[4982]: Jan 23 08:20:53 crc kubenswrapper[4982]: if [ -n "" ]; then Jan 23 08:20:53 crc kubenswrapper[4982]: GRANT_DATABASE="" Jan 23 08:20:53 crc kubenswrapper[4982]: else Jan 23 08:20:53 crc kubenswrapper[4982]: GRANT_DATABASE="*" Jan 23 08:20:53 crc kubenswrapper[4982]: fi Jan 23 08:20:53 crc kubenswrapper[4982]: Jan 23 08:20:53 crc kubenswrapper[4982]: # going for maximum compatibility here: Jan 23 08:20:53 crc kubenswrapper[4982]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 23 08:20:53 crc kubenswrapper[4982]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 23 08:20:53 crc kubenswrapper[4982]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 23 08:20:53 crc kubenswrapper[4982]: # support updates Jan 23 08:20:53 crc kubenswrapper[4982]: Jan 23 08:20:53 crc kubenswrapper[4982]: $MYSQL_CMD < logger="UnhandledError" Jan 23 08:20:53 crc kubenswrapper[4982]: E0123 08:20:53.355215 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-5x8nm" podUID="9bdeef6c-2103-4eee-8728-81d9fb1cf3a1" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.355281 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d4675771-9ce7-4bd7-a992-4358cbfdc2a0","Type":"ContainerDied","Data":"6da82b1aafa824208b62bb64ea4bde402ad931ab5bf29d7e00a3bbb34882fca8"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.355301 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.355339 4982 scope.go:117] "RemoveContainer" containerID="0f1c8d87f903524dedefa27a2115fc71aee084b65445e7b80d6ed93625ad043f" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.365781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6d1bed17-dcb3-462e-8227-272c4c1fbc16","Type":"ContainerDied","Data":"bd1ba21f47b2385268cce380c1fb4827dfe12ae51c2bfe3691f31eb561a11667"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.365820 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.371280 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d81a8b1-9b52-44df-bc0b-7852d3b8570c" (UID: "8d81a8b1-9b52-44df-bc0b-7852d3b8570c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.374781 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.374861 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8d81a8b1-9b52-44df-bc0b-7852d3b8570c","Type":"ContainerDied","Data":"5756f207c29b9bc1df2bf875ecdca6a0bee6d380afe7d063c01461743a4802d6"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.377515 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" (UID: "22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.383109 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"75a96cf7-6cad-496c-9aa6-523fce7af3ff","Type":"ContainerDied","Data":"0729eb8ee9dbced3b30dc5842096d3e6b61158bcb468ed8724b6c9f1b68a612f"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.383202 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.385488 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf676c6b-tgrs7" event={"ID":"e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b","Type":"ContainerDied","Data":"8affc164004ec0111199baa0e4455448645eebed1db16199d47ab54930681aeb"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.385609 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ddf676c6b-tgrs7" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.394708 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerDied","Data":"bc2badabfe3be71114926cfc4d259bac2c6e7c4fee0fe0967b92bbf9f88f6e08"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.395660 4982 generic.go:334] "Generic (PLEG): container finished" podID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerID="bc2badabfe3be71114926cfc4d259bac2c6e7c4fee0fe0967b92bbf9f88f6e08" exitCode=0 Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.407851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data" (OuterVolumeSpecName: "config-data") pod "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" (UID: "9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.413919 4982 scope.go:117] "RemoveContainer" containerID="d7c245dddb72e897f40f47d481d743ead593b7d7c2dc1672b3d530d7eb076629" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.417573 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7965657874-h2sgm" event={"ID":"f2b7b481-7320-4d03-a230-a173c66dae36","Type":"ContainerDied","Data":"4db0ea8fe1956c0c18296c433c11a6562155392ffb15777f4d99c6e42d090bb1"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.417857 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7965657874-h2sgm" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.426164 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"938a15bf-06b9-40ed-89ac-ce3da6052837","Type":"ContainerDied","Data":"e25640ae73a06f238acf092a470aaeb3829322c0343810452804dc175418111a"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.426307 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.432270 4982 generic.go:334] "Generic (PLEG): container finished" podID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" containerID="780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e" exitCode=0 Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.432361 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16","Type":"ContainerDied","Data":"780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.432397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16","Type":"ContainerDied","Data":"e2009cbf4ea4a3d1c3266e18593c6f43e6ae37eebc7b37e2fc039178b62bce04"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.432489 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.441518 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2fgw\" (UniqueName: \"kubernetes.io/projected/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-kube-api-access-w2fgw\") pod \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.441703 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-combined-ca-bundle\") pod \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.441788 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-config-data\") pod \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\" (UID: \"3ad26b54-da5c-40d5-a715-b47a4f59b4a7\") " Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444593 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444640 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444653 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444666 4982 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444677 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444691 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.444704 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.448777 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.456662 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data" (OuterVolumeSpecName: "config-data") pod "8d81a8b1-9b52-44df-bc0b-7852d3b8570c" (UID: "8d81a8b1-9b52-44df-bc0b-7852d3b8570c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.469801 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.482780 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.486890 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data" (OuterVolumeSpecName: "config-data") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.487083 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-kube-api-access-w2fgw" (OuterVolumeSpecName: "kube-api-access-w2fgw") pod "3ad26b54-da5c-40d5-a715-b47a4f59b4a7" (UID: "3ad26b54-da5c-40d5-a715-b47a4f59b4a7"). InnerVolumeSpecName "kube-api-access-w2fgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.489756 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.505190 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f2b7b481-7320-4d03-a230-a173c66dae36" (UID: "f2b7b481-7320-4d03-a230-a173c66dae36"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.505773 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.505816 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ddf676c6b-tgrs7"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.505855 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"497bf205-5985-411f-960e-2364563ae9e1","Type":"ContainerDied","Data":"d724c0b597cff2bd1aeba031b86c5e81de3f3267b8d9c5a73da8c709b7c8d5fe"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.524156 4982 generic.go:334] "Generic (PLEG): container finished" podID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079" exitCode=0 Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.524321 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.525018 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-config-data" (OuterVolumeSpecName: "config-data") pod "3ad26b54-da5c-40d5-a715-b47a4f59b4a7" (UID: "3ad26b54-da5c-40d5-a715-b47a4f59b4a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.525198 4982 scope.go:117] "RemoveContainer" containerID="3a4d51e189b74fd0d7cce186ffa8581a0e4f9c7c3df73a5e73547594eddaa7d7" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.525273 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ddf676c6b-tgrs7"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.525317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ad26b54-da5c-40d5-a715-b47a4f59b4a7","Type":"ContainerDied","Data":"4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.525343 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3ad26b54-da5c-40d5-a715-b47a4f59b4a7","Type":"ContainerDied","Data":"84a8c262e8ab1b27aeff793a9e743ce29b057dcc337d54ad305a7c99b3abe23d"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.563753 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564339 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564369 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d81a8b1-9b52-44df-bc0b-7852d3b8570c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564380 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564396 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2fgw\" (UniqueName: \"kubernetes.io/projected/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-kube-api-access-w2fgw\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564406 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b7b481-7320-4d03-a230-a173c66dae36-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564070 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.564910 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb","Type":"ContainerDied","Data":"9e99d4ff06777ba54e1801d9213c9121fabcd2935a246d5e54b9a7f8ffbb75d2"} Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.577864 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3155-account-create-update-5l62h" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.589148 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ad26b54-da5c-40d5-a715-b47a4f59b4a7" (UID: "3ad26b54-da5c-40d5-a715-b47a4f59b4a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.618275 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.669683 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad26b54-da5c-40d5-a715-b47a4f59b4a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.671846 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.713864 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.838437 4982 scope.go:117] "RemoveContainer" containerID="2fdf936f82b58284aa5deb1d6981c7ab39aeaacc42c4318726f6a5011a3f326d" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.875003 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" path="/var/lib/kubelet/pods/10c2a3b1-f4b9-4388-9261-fa703cfaf757/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.888848 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" path="/var/lib/kubelet/pods/22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.890108 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9" path="/var/lib/kubelet/pods/65f8d5b9-4b00-4164-ae1a-3d3caffdc4f9/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.890761 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" path="/var/lib/kubelet/pods/6d1bed17-dcb3-462e-8227-272c4c1fbc16/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.891355 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740c5af1-ac3c-431a-ab6e-c5aecf933fea" path="/var/lib/kubelet/pods/740c5af1-ac3c-431a-ab6e-c5aecf933fea/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.894913 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" path="/var/lib/kubelet/pods/75a96cf7-6cad-496c-9aa6-523fce7af3ff/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.895553 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996235df-739a-4029-96df-88e45a44a38c" path="/var/lib/kubelet/pods/996235df-739a-4029-96df-88e45a44a38c/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.896850 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a358b847-122d-4a65-99b9-e21d6a8136aa" path="/var/lib/kubelet/pods/a358b847-122d-4a65-99b9-e21d6a8136aa/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.897579 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" path="/var/lib/kubelet/pods/d4675771-9ce7-4bd7-a992-4358cbfdc2a0/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.914888 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" path="/var/lib/kubelet/pods/e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b/volumes" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.929144 4982 
scope.go:117] "RemoveContainer" containerID="d59b5dc7ed6c69bba6c7d620f3e44ff6159fc8950b7d39dcc371af91ae5c4782" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.930517 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.930564 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.930581 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.958783 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.960084 4982 scope.go:117] "RemoveContainer" containerID="f85206e6ea38cc3b96a676eb48cd6ab7fe8f14acdcd1691d0be85962d1d2a103" Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.979817 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7965657874-h2sgm"] Jan 23 08:20:53 crc kubenswrapper[4982]: I0123 08:20:53.995257 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7965657874-h2sgm"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.008728 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.024256 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.035738 4982 scope.go:117] "RemoveContainer" containerID="2ae97d13aafab9b78c55d20b89d369229f390d7aed5aea78a7522b15b9f48310" Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.045072 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3155-account-create-update-5l62h"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.060800 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3155-account-create-update-5l62h"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.086357 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.121617 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.127758 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.131049 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.131193 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.132544 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.132587 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="ovn-northd"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.140324 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.183201 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d43e6d2-7430-4e07-bf0d-fe8b58842799-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.183239 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtfjj\" (UniqueName: \"kubernetes.io/projected/4d43e6d2-7430-4e07-bf0d-fe8b58842799-kube-api-access-mtfjj\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.183298 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="996235df-739a-4029-96df-88e45a44a38c" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.226:6080/vnc_lite.html\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.202946 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.229949 4982 scope.go:117] "RemoveContainer" containerID="3e9ea0f153b90fef5a0f09d7ff49a929afa3ab0bee839a8bf8bff02973243915"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.280672 4982 scope.go:117] "RemoveContainer" containerID="aeab8722bcf476c6445e671d68cf2b7cfff2199e663f2a397789bae6c713a5ba"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.309717 4982 scope.go:117] "RemoveContainer" containerID="cf9c6471bb4e01bd9fd153522ac53f92eade7e364f44fd40abf8bdcad7b61af8"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386107 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386187 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26d23dde-8aac-481b-bfdb-ce3315166bc4-erlang-cookie-secret\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386244 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-plugins\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386268 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-erlang-cookie\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386399 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-plugins-conf\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386484 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386506 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26d23dde-8aac-481b-bfdb-ce3315166bc4-pod-info\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386571 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-tls\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386646 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-confd\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386677 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-server-conf\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386733 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmrz\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-kube-api-access-lfmrz\") pod \"26d23dde-8aac-481b-bfdb-ce3315166bc4\" (UID: \"26d23dde-8aac-481b-bfdb-ce3315166bc4\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386963 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.386921 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.387709 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.387986 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.388007 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.388019 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.393168 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-kube-api-access-lfmrz" (OuterVolumeSpecName: "kube-api-access-lfmrz") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "kube-api-access-lfmrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.395016 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/26d23dde-8aac-481b-bfdb-ce3315166bc4-pod-info" (OuterVolumeSpecName: "pod-info") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.403818 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.404154 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.418837 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d23dde-8aac-481b-bfdb-ce3315166bc4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.469755 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data" (OuterVolumeSpecName: "config-data") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.469823 4982 scope.go:117] "RemoveContainer" containerID="44123210e26061b640cdcc4b3b9dc1005507bd1badfae509c93d9f329c1f4807"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.490442 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.490473 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfmrz\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-kube-api-access-lfmrz\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.490508 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.490517 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/26d23dde-8aac-481b-bfdb-ce3315166bc4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.490527 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.490536 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/26d23dde-8aac-481b-bfdb-ce3315166bc4-pod-info\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.511387 4982 scope.go:117] "RemoveContainer" containerID="954f7c6825edc3a6a04648b00c0271b4e702de967be419c5506096f1f886a150"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.519786 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-server-conf" (OuterVolumeSpecName: "server-conf") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.527002 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.598519 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/26d23dde-8aac-481b-bfdb-ce3315166bc4-server-conf\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.598547 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.671972 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.723336 4982 scope.go:117] "RemoveContainer" containerID="780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.728192 4982 generic.go:334] "Generic (PLEG): container finished" podID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerID="45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f" exitCode=0
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.728253 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26d23dde-8aac-481b-bfdb-ce3315166bc4","Type":"ContainerDied","Data":"45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.728278 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"26d23dde-8aac-481b-bfdb-ce3315166bc4","Type":"ContainerDied","Data":"966fb840755e0d2c7ff72877b3911b657707a9d8e4bf466fe4bf0577d059b908"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.728336 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.752762 4982 generic.go:334] "Generic (PLEG): container finished" podID="4f4741be-bf98-457a-9275-45405ac3f257" containerID="c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2" exitCode=0
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.752832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4f4741be-bf98-457a-9275-45405ac3f257","Type":"ContainerDied","Data":"c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.752862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"4f4741be-bf98-457a-9275-45405ac3f257","Type":"ContainerDied","Data":"50a143aee06109d77c7bc62d3104a6e558a0d8940a71451cda8fb11cf0afb120"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.752946 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.761226 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "26d23dde-8aac-481b-bfdb-ce3315166bc4" (UID: "26d23dde-8aac-481b-bfdb-ce3315166bc4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.767006 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9e81c71a-ceaf-4e4a-b875-da5f9abcdd06/ovn-northd/0.log"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.767058 4982 generic.go:334] "Generic (PLEG): container finished" podID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" exitCode=139
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.767128 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06","Type":"ContainerDied","Data":"9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.778519 4982 generic.go:334] "Generic (PLEG): container finished" podID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerID="2a7e5d12bd1b08f1897cad70a1c21dbddaaabae2025c8884bde0a2e110eb49ed" exitCode=0
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.778593 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b4b8957-4a36-4d8b-a591-fdcbe696c808","Type":"ContainerDied","Data":"2a7e5d12bd1b08f1897cad70a1c21dbddaaabae2025c8884bde0a2e110eb49ed"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.783217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5x8nm" event={"ID":"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1","Type":"ContainerStarted","Data":"cbe42b0a416630cd15219c85b11c015703ba725eacf8b70d48bb3bbb35f7deff"}
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806296 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-combined-ca-bundle\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806364 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4741be-bf98-457a-9275-45405ac3f257-config-data-generated\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806393 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-galera-tls-certs\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806426 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-config-data-default\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806472 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-kolla-config\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806517 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806552 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-operator-scripts\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.806689 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z7c2\" (UniqueName: \"kubernetes.io/projected/4f4741be-bf98-457a-9275-45405ac3f257-kube-api-access-2z7c2\") pod \"4f4741be-bf98-457a-9275-45405ac3f257\" (UID: \"4f4741be-bf98-457a-9275-45405ac3f257\") "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.807560 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.808021 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.808394 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/26d23dde-8aac-481b-bfdb-ce3315166bc4-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.808407 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.808416 4982 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.809476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.812353 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4741be-bf98-457a-9275-45405ac3f257-kube-api-access-2z7c2" (OuterVolumeSpecName: "kube-api-access-2z7c2") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "kube-api-access-2z7c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.828864 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f4741be-bf98-457a-9275-45405ac3f257-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.837662 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.841262 4982 scope.go:117] "RemoveContainer" containerID="780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.849875 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.851536 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e\": container with ID starting with 780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e not found: ID does not exist" containerID="780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.851800 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e"} err="failed to get container status \"780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e\": rpc error: code = NotFound desc = could not find container \"780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e\": container with ID starting with 780f608758a6483e58bf25514c2239b929290296398d921843d06929f7eae14e not found: ID does not exist"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.851963 4982 scope.go:117] "RemoveContainer" containerID="3cd84b06fc466f0e513e71745eb756df662bac487350768995821b496abc50d0"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.869813 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.888878 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "4f4741be-bf98-457a-9275-45405ac3f257" (UID: "4f4741be-bf98-457a-9275-45405ac3f257"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.910019 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z7c2\" (UniqueName: \"kubernetes.io/projected/4f4741be-bf98-457a-9275-45405ac3f257-kube-api-access-2z7c2\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.910289 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.910373 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4f4741be-bf98-457a-9275-45405ac3f257-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.910450 4982 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f4741be-bf98-457a-9275-45405ac3f257-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.910509 4982 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Jan 23 08:20:54 crc kubenswrapper[4982]: E0123 08:20:54.910578 4982 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts podName:9bdeef6c-2103-4eee-8728-81d9fb1cf3a1 nodeName:}" failed. No retries permitted until 2026-01-23 08:20:58.910558684 +0000 UTC m=+1533.390922070 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts") pod "root-account-create-update-5x8nm" (UID: "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1") : configmap "openstack-scripts" not found
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.910713 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.910821 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4741be-bf98-457a-9275-45405ac3f257-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.938254 4982 scope.go:117] "RemoveContainer" containerID="98cf2cebef5f1b732d6a19c80cc61916d6f7fb654b98a4e5043b8c16409a7e5f"
Jan 23 08:20:54 crc kubenswrapper[4982]: I0123 08:20:54.984905 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.008751 4982 scope.go:117] "RemoveContainer" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.011993 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcw2n\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-kube-api-access-wcw2n\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012145 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-erlang-cookie\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012246 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-plugins-conf\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012354 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b4b8957-4a36-4d8b-a591-fdcbe696c808-pod-info\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012418 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b4b8957-4a36-4d8b-a591-fdcbe696c808-erlang-cookie-secret\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012646 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012739 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-confd\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-tls\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.012939 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-plugins\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.013024 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.013131 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-server-conf\") pod \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\" (UID: \"0b4b8957-4a36-4d8b-a591-fdcbe696c808\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.013304 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.013720 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.013788 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.014397 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.018336 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.019233 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-kube-api-access-wcw2n" (OuterVolumeSpecName: "kube-api-access-wcw2n") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "kube-api-access-wcw2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.019762 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.022233 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0b4b8957-4a36-4d8b-a591-fdcbe696c808-pod-info" (OuterVolumeSpecName: "pod-info") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.024000 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.037393 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b4b8957-4a36-4d8b-a591-fdcbe696c808-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.078530 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data" (OuterVolumeSpecName: "config-data") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.088248 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-server-conf" (OuterVolumeSpecName: "server-conf") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120554 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120609 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b4b8957-4a36-4d8b-a591-fdcbe696c808-pod-info\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120634 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b4b8957-4a36-4d8b-a591-fdcbe696c808-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120648 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120661 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120670 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120738 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120750 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b4b8957-4a36-4d8b-a591-fdcbe696c808-server-conf\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.120761 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcw2n\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-kube-api-access-wcw2n\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.171991 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0b4b8957-4a36-4d8b-a591-fdcbe696c808" (UID: "0b4b8957-4a36-4d8b-a591-fdcbe696c808"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.172205 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.224568 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b4b8957-4a36-4d8b-a591-fdcbe696c808-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.225199 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.307108 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9e81c71a-ceaf-4e4a-b875-da5f9abcdd06/ovn-northd/0.log"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.307200 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.349017 4982 scope.go:117] "RemoveContainer" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079"
Jan 23 08:20:55 crc kubenswrapper[4982]: E0123 08:20:55.355806 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079\": container with ID starting with 4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079 not found: ID does not exist" containerID="4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.355894 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079"} err="failed to get container status \"4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079\": rpc error: code = NotFound desc = could not find container \"4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079\": container with ID starting with 4d1c843e3b63e2c9db6e8e4f7ea2f6f493aa109f85b4071b17a9985576338079 not found: ID does not exist"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.355935 4982 scope.go:117] "RemoveContainer" containerID="0b613fdca688bd3035970b7dc6b487ea56dac750dadeb82a2a1db42250f517a7"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.403002 4982 scope.go:117] "RemoveContainer" containerID="40ca415939a7e1e94f0012bb346eed19b90d1b52ec23f37e175fe94046754239"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.404957 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.417383 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5x8nm"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.420120 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.429317 4982 scope.go:117] "RemoveContainer" containerID="45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.432144 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440366 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-rundir\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440536 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-northd-tls-certs\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440575 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-combined-ca-bundle\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440679 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwm4t\" (UniqueName: \"kubernetes.io/projected/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-kube-api-access-zwm4t\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440712 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-scripts\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440750 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-config\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.440799 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-metrics-certs-tls-certs\") pod \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\" (UID: \"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.442263 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-scripts" (OuterVolumeSpecName: "scripts") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.442297 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.442795 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-config" (OuterVolumeSpecName: "config") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.443923 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.448445 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-kube-api-access-zwm4t" (OuterVolumeSpecName: "kube-api-access-zwm4t") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "kube-api-access-zwm4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.505005 4982 scope.go:117] "RemoveContainer" containerID="ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.542191 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twhpv\" (UniqueName: \"kubernetes.io/projected/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-kube-api-access-twhpv\") pod \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.542371 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts\") pod \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\" (UID: \"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.542816 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-rundir\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.542829 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwm4t\" (UniqueName: \"kubernetes.io/projected/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-kube-api-access-zwm4t\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.542841 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.542849 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-config\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.543203 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1" (UID: "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.566404 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-kube-api-access-twhpv" (OuterVolumeSpecName: "kube-api-access-twhpv") pod "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1" (UID: "9bdeef6c-2103-4eee-8728-81d9fb1cf3a1"). InnerVolumeSpecName "kube-api-access-twhpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.600117 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.606566 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.624663 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" (UID: "9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.644828 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.644885 4982 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.644899 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.644913 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twhpv\" (UniqueName: \"kubernetes.io/projected/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1-kube-api-access-twhpv\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.644927 4982 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.795999 4982 scope.go:117] "RemoveContainer" containerID="45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f"
Jan 23 08:20:55 crc kubenswrapper[4982]: E0123 08:20:55.796344 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f\": container with ID starting with 45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f not found: ID does not exist" containerID="45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.796376 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f"} err="failed to get container status \"45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f\": rpc error: code = NotFound desc = could not find container \"45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f\": container with ID starting with 45ad057cd2f8436fcd226824cb6347c96215887ca29784996eccb16f5430ee7f not found: ID does not exist"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.796399 4982 scope.go:117] "RemoveContainer" containerID="ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef"
Jan 23 08:20:55 crc kubenswrapper[4982]: E0123 08:20:55.796916 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef\": container with ID starting with ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef not found: ID does not exist" containerID="ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.796964 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef"} err="failed to get container status \"ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef\": rpc error: code = NotFound desc = could not find container \"ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef\": container with ID starting with ab104b25d361ec8b38aeb7c7e706c344c7f2d4461d73eca42de7642020b594ef not found: ID does not exist"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.796999 4982 scope.go:117] "RemoveContainer" containerID="c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.809170 4982 generic.go:334] "Generic (PLEG): container finished" podID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerID="6c840aeda273bccf85f7f5295d0123e2191ab16e2c3ce7f4c6ffeaffac682224" exitCode=0
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.809593 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerDied","Data":"6c840aeda273bccf85f7f5295d0123e2191ab16e2c3ce7f4c6ffeaffac682224"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.809723 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"16e7b450-ab50-4a27-ae68-450c34eb2cba","Type":"ContainerDied","Data":"b02308e3b9f16694380f41f5b18021820fe2f4b45566f747b7cd766b3fc9afa5"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.809846 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b02308e3b9f16694380f41f5b18021820fe2f4b45566f747b7cd766b3fc9afa5"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.812464 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0b4b8957-4a36-4d8b-a591-fdcbe696c808","Type":"ContainerDied","Data":"882c198d7bc6a3775ed97b448804f1092ce6195bcc3449f5f693c80ddce3c567"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.812718 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.818476 4982 generic.go:334] "Generic (PLEG): container finished" podID="be86b806-f778-4826-8cfa-c29d366c9af5" containerID="121ac015dcf20bdb873bb6bfe8f528431aaf6cd7e2e7cfb1ebc3aff798e014ee" exitCode=0
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.818501 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" event={"ID":"be86b806-f778-4826-8cfa-c29d366c9af5","Type":"ContainerDied","Data":"121ac015dcf20bdb873bb6bfe8f528431aaf6cd7e2e7cfb1ebc3aff798e014ee"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.821034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"271fae4a-bd02-4281-a588-2784769ed552","Type":"ContainerDied","Data":"6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.821016 4982 generic.go:334] "Generic (PLEG): container finished" podID="271fae4a-bd02-4281-a588-2784769ed552" containerID="6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278" exitCode=0
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.822503 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5x8nm" event={"ID":"9bdeef6c-2103-4eee-8728-81d9fb1cf3a1","Type":"ContainerDied","Data":"cbe42b0a416630cd15219c85b11c015703ba725eacf8b70d48bb3bbb35f7deff"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.822560 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5x8nm"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.845857 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" path="/var/lib/kubelet/pods/26d23dde-8aac-481b-bfdb-ce3315166bc4/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.846526 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" path="/var/lib/kubelet/pods/3ad26b54-da5c-40d5-a715-b47a4f59b4a7/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.847353 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497bf205-5985-411f-960e-2364563ae9e1" path="/var/lib/kubelet/pods/497bf205-5985-411f-960e-2364563ae9e1/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.848458 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d43e6d2-7430-4e07-bf0d-fe8b58842799" path="/var/lib/kubelet/pods/4d43e6d2-7430-4e07-bf0d-fe8b58842799/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.848889 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4741be-bf98-457a-9275-45405ac3f257" path="/var/lib/kubelet/pods/4f4741be-bf98-457a-9275-45405ac3f257/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.849672 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" path="/var/lib/kubelet/pods/8d81a8b1-9b52-44df-bc0b-7852d3b8570c/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.849943 4982 scope.go:117] "RemoveContainer" containerID="6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.851063 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9e81c71a-ceaf-4e4a-b875-da5f9abcdd06/ovn-northd/0.log"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.851302 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.851501 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938a15bf-06b9-40ed-89ac-ce3da6052837" path="/var/lib/kubelet/pods/938a15bf-06b9-40ed-89ac-ce3da6052837/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.858165 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" path="/var/lib/kubelet/pods/9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.860036 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" path="/var/lib/kubelet/pods/f2b7b481-7320-4d03-a230-a173c66dae36/volumes"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.864283 4982 generic.go:334] "Generic (PLEG): container finished" podID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerID="bd096c9a64adc5e24222bd520105f18c192474be440d5f8370a222b01b9f8c41" exitCode=0
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.865410 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.866696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9e81c71a-ceaf-4e4a-b875-da5f9abcdd06","Type":"ContainerDied","Data":"90a78cc77e8c663f939a2330de37f397f34a1200d83d5d74765b29e1d020d9f2"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.866766 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6959c78bf5-trn2r" event={"ID":"c49f67d9-be69-448a-9f50-cf2801ed1528","Type":"ContainerDied","Data":"bd096c9a64adc5e24222bd520105f18c192474be440d5f8370a222b01b9f8c41"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.901381 4982 generic.go:334] "Generic (PLEG): container finished" podID="b969cd53-c58b-4a97-89b8-8581874e1336" containerID="99034c7805c03b0bb5d9d7d40da49977052e28b7160c0383014a800c8a076fbc" exitCode=0
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.901456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cd9887484-nrtvc" event={"ID":"b969cd53-c58b-4a97-89b8-8581874e1336","Type":"ContainerDied","Data":"99034c7805c03b0bb5d9d7d40da49977052e28b7160c0383014a800c8a076fbc"}
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.918208 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.926676 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6959c78bf5-trn2r"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.935646 4982 scope.go:117] "RemoveContainer" containerID="c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2"
Jan 23 08:20:55 crc kubenswrapper[4982]: E0123 08:20:55.936295 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2\": container with ID starting with c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2 not found: ID does not exist" containerID="c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.936338 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2"} err="failed to get container status \"c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2\": rpc error: code = NotFound desc = could not find container \"c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2\": container with ID starting with c33183a0d35d9ec1f8b383b2cf1db5663a783321831cecacc329c53f2b5402b2 not found: ID does not exist"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.936370 4982 scope.go:117] "RemoveContainer" containerID="6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2"
Jan 23 08:20:55 crc kubenswrapper[4982]: E0123 08:20:55.936906 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2\": container with ID starting with 6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2 not found: ID does not exist" containerID="6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.936955 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2"} err="failed to get container status \"6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2\": rpc error: code = NotFound desc = could not find container \"6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2\": container with ID starting with 6842928a45462c8c7b61712d9529549d357d59495509ca57c4a4fb2f26520bf2 not found: ID does not exist"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.936989 4982 scope.go:117] "RemoveContainer" containerID="2a7e5d12bd1b08f1897cad70a1c21dbddaaabae2025c8884bde0a2e110eb49ed"
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.943758 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.960731 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-combined-ca-bundle\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") "
Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.960862 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-run-httpd\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: 
\"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.960890 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-config-data\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.960923 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-sg-core-conf-yaml\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.961534 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.962703 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-ceilometer-tls-certs\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.962747 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-log-httpd\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.962808 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-scripts\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.962837 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v5l5\" (UniqueName: \"kubernetes.io/projected/16e7b450-ab50-4a27-ae68-450c34eb2cba-kube-api-access-2v5l5\") pod \"16e7b450-ab50-4a27-ae68-450c34eb2cba\" (UID: \"16e7b450-ab50-4a27-ae68-450c34eb2cba\") " Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.963368 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.965003 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.965088 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/16e7b450-ab50-4a27-ae68-450c34eb2cba-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.979754 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e7b450-ab50-4a27-ae68-450c34eb2cba-kube-api-access-2v5l5" (OuterVolumeSpecName: "kube-api-access-2v5l5") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "kube-api-access-2v5l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.980246 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-scripts" (OuterVolumeSpecName: "scripts") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:55 crc kubenswrapper[4982]: I0123 08:20:55.989858 4982 scope.go:117] "RemoveContainer" containerID="0d2e98831fddc20f7bb33d42b529158a780bb8345579dad163a63f455e992e0d" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.002474 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.008445 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5x8nm"] Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.014125 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5x8nm"] Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.061602 4982 scope.go:117] "RemoveContainer" containerID="1cdfe73af7e6305c3c12db7d9df0b35dbe1da5d1128309e49095c22ea2966020" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.066319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8kc6\" (UniqueName: \"kubernetes.io/projected/c49f67d9-be69-448a-9f50-cf2801ed1528-kube-api-access-r8kc6\") pod \"c49f67d9-be69-448a-9f50-cf2801ed1528\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.066373 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49f67d9-be69-448a-9f50-cf2801ed1528-logs\") pod \"c49f67d9-be69-448a-9f50-cf2801ed1528\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.066486 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data\") pod \"c49f67d9-be69-448a-9f50-cf2801ed1528\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.066587 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data-custom\") pod \"c49f67d9-be69-448a-9f50-cf2801ed1528\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.066721 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-combined-ca-bundle\") pod \"c49f67d9-be69-448a-9f50-cf2801ed1528\" (UID: \"c49f67d9-be69-448a-9f50-cf2801ed1528\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.069127 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.069151 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v5l5\" (UniqueName: \"kubernetes.io/projected/16e7b450-ab50-4a27-ae68-450c34eb2cba-kube-api-access-2v5l5\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.069163 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.070910 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c49f67d9-be69-448a-9f50-cf2801ed1528-logs" (OuterVolumeSpecName: "logs") pod "c49f67d9-be69-448a-9f50-cf2801ed1528" (UID: "c49f67d9-be69-448a-9f50-cf2801ed1528"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.074721 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.075265 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49f67d9-be69-448a-9f50-cf2801ed1528-kube-api-access-r8kc6" (OuterVolumeSpecName: "kube-api-access-r8kc6") pod "c49f67d9-be69-448a-9f50-cf2801ed1528" (UID: "c49f67d9-be69-448a-9f50-cf2801ed1528"). InnerVolumeSpecName "kube-api-access-r8kc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.078796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c49f67d9-be69-448a-9f50-cf2801ed1528" (UID: "c49f67d9-be69-448a-9f50-cf2801ed1528"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.101007 4982 scope.go:117] "RemoveContainer" containerID="9f5748335fad6472f81304fb476b2f1ea8b6fd54f22346220b571fe013e9c670" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.101796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.114597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-config-data" (OuterVolumeSpecName: "config-data") pod "16e7b450-ab50-4a27-ae68-450c34eb2cba" (UID: "16e7b450-ab50-4a27-ae68-450c34eb2cba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.126965 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c49f67d9-be69-448a-9f50-cf2801ed1528" (UID: "c49f67d9-be69-448a-9f50-cf2801ed1528"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.132397 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.159078 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data" (OuterVolumeSpecName: "config-data") pod "c49f67d9-be69-448a-9f50-cf2801ed1528" (UID: "c49f67d9-be69-448a-9f50-cf2801ed1528"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170513 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170551 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170565 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8kc6\" (UniqueName: \"kubernetes.io/projected/c49f67d9-be69-448a-9f50-cf2801ed1528-kube-api-access-r8kc6\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170577 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c49f67d9-be69-448a-9f50-cf2801ed1528-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170593 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170605 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170635 4982 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/16e7b450-ab50-4a27-ae68-450c34eb2cba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170646 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c49f67d9-be69-448a-9f50-cf2801ed1528-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.170571 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272166 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-config-data\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272204 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-internal-tls-certs\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-public-tls-certs\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272295 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-combined-ca-bundle\") pod \"be86b806-f778-4826-8cfa-c29d366c9af5\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272320 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data-custom\") pod \"be86b806-f778-4826-8cfa-c29d366c9af5\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272342 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-scripts\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272371 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt8vb\" (UniqueName: \"kubernetes.io/projected/be86b806-f778-4826-8cfa-c29d366c9af5-kube-api-access-dt8vb\") pod \"be86b806-f778-4826-8cfa-c29d366c9af5\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272409 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4nvs\" (UniqueName: \"kubernetes.io/projected/b969cd53-c58b-4a97-89b8-8581874e1336-kube-api-access-l4nvs\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272471 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be86b806-f778-4826-8cfa-c29d366c9af5-logs\") pod \"be86b806-f778-4826-8cfa-c29d366c9af5\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272524 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-combined-ca-bundle\") pod 
\"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272555 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data\") pod \"be86b806-f778-4826-8cfa-c29d366c9af5\" (UID: \"be86b806-f778-4826-8cfa-c29d366c9af5\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272583 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-credential-keys\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.272614 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-fernet-keys\") pod \"b969cd53-c58b-4a97-89b8-8581874e1336\" (UID: \"b969cd53-c58b-4a97-89b8-8581874e1336\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.274414 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be86b806-f778-4826-8cfa-c29d366c9af5-logs" (OuterVolumeSpecName: "logs") pod "be86b806-f778-4826-8cfa-c29d366c9af5" (UID: "be86b806-f778-4826-8cfa-c29d366c9af5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.276782 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-scripts" (OuterVolumeSpecName: "scripts") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.276999 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.277507 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.277568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b969cd53-c58b-4a97-89b8-8581874e1336-kube-api-access-l4nvs" (OuterVolumeSpecName: "kube-api-access-l4nvs") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "kube-api-access-l4nvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.277510 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "be86b806-f778-4826-8cfa-c29d366c9af5" (UID: "be86b806-f778-4826-8cfa-c29d366c9af5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.282728 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be86b806-f778-4826-8cfa-c29d366c9af5-kube-api-access-dt8vb" (OuterVolumeSpecName: "kube-api-access-dt8vb") pod "be86b806-f778-4826-8cfa-c29d366c9af5" (UID: "be86b806-f778-4826-8cfa-c29d366c9af5"). InnerVolumeSpecName "kube-api-access-dt8vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.298202 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-config-data" (OuterVolumeSpecName: "config-data") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.306390 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.317932 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be86b806-f778-4826-8cfa-c29d366c9af5" (UID: "be86b806-f778-4826-8cfa-c29d366c9af5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.346232 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.346476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data" (OuterVolumeSpecName: "config-data") pod "be86b806-f778-4826-8cfa-c29d366c9af5" (UID: "be86b806-f778-4826-8cfa-c29d366c9af5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.356572 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b969cd53-c58b-4a97-89b8-8581874e1336" (UID: "b969cd53-c58b-4a97-89b8-8581874e1336"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374445 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374477 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374487 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374497 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374506 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt8vb\" (UniqueName: \"kubernetes.io/projected/be86b806-f778-4826-8cfa-c29d366c9af5-kube-api-access-dt8vb\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374516 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4nvs\" (UniqueName: \"kubernetes.io/projected/b969cd53-c58b-4a97-89b8-8581874e1336-kube-api-access-l4nvs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374526 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be86b806-f778-4826-8cfa-c29d366c9af5-logs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374535 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374544 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be86b806-f778-4826-8cfa-c29d366c9af5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374554 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374563 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374573 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.374581 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b969cd53-c58b-4a97-89b8-8581874e1336-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc 
kubenswrapper[4982]: I0123 08:20:56.449851 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.579670 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-config-data\") pod \"271fae4a-bd02-4281-a588-2784769ed552\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.579769 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-combined-ca-bundle\") pod \"271fae4a-bd02-4281-a588-2784769ed552\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.579863 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr6rd\" (UniqueName: \"kubernetes.io/projected/271fae4a-bd02-4281-a588-2784769ed552-kube-api-access-rr6rd\") pod \"271fae4a-bd02-4281-a588-2784769ed552\" (UID: \"271fae4a-bd02-4281-a588-2784769ed552\") " Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.584410 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/271fae4a-bd02-4281-a588-2784769ed552-kube-api-access-rr6rd" (OuterVolumeSpecName: "kube-api-access-rr6rd") pod "271fae4a-bd02-4281-a588-2784769ed552" (UID: "271fae4a-bd02-4281-a588-2784769ed552"). InnerVolumeSpecName "kube-api-access-rr6rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.610424 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "271fae4a-bd02-4281-a588-2784769ed552" (UID: "271fae4a-bd02-4281-a588-2784769ed552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.620488 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-config-data" (OuterVolumeSpecName: "config-data") pod "271fae4a-bd02-4281-a588-2784769ed552" (UID: "271fae4a-bd02-4281-a588-2784769ed552"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.682049 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.682101 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271fae4a-bd02-4281-a588-2784769ed552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.682117 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr6rd\" (UniqueName: \"kubernetes.io/projected/271fae4a-bd02-4281-a588-2784769ed552-kube-api-access-rr6rd\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.912257 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.912513 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.912745 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.912780 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.913595 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.914599 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 
08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.916360 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:20:56 crc kubenswrapper[4982]: E0123 08:20:56.916403 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.923800 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7cd9887484-nrtvc" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.923785 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7cd9887484-nrtvc" event={"ID":"b969cd53-c58b-4a97-89b8-8581874e1336","Type":"ContainerDied","Data":"27ef306718fb256397a2264a79fd54ec8498758f1f45d955d19b94fe60e74b81"} Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.923952 4982 scope.go:117] "RemoveContainer" containerID="99034c7805c03b0bb5d9d7d40da49977052e28b7160c0383014a800c8a076fbc" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.936199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" event={"ID":"be86b806-f778-4826-8cfa-c29d366c9af5","Type":"ContainerDied","Data":"faadac4d00c6f41df194b210756886d2caa37bb189f8e5cdac68cffd6a9c3d86"} Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.936327 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.944842 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"271fae4a-bd02-4281-a588-2784769ed552","Type":"ContainerDied","Data":"cf1cdb5c5bd92db7ac7c59fbe9986f450464688f0f0ed2e108bc80e1cbaad779"} Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.945011 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.952169 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6959c78bf5-trn2r" event={"ID":"c49f67d9-be69-448a-9f50-cf2801ed1528","Type":"ContainerDied","Data":"732b3f44177f3c0955253216add36e0d7f86738276e441897aed76bd5024a3ab"} Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.952286 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6959c78bf5-trn2r" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.963026 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.968195 4982 scope.go:117] "RemoveContainer" containerID="121ac015dcf20bdb873bb6bfe8f528431aaf6cd7e2e7cfb1ebc3aff798e014ee" Jan 23 08:20:56 crc kubenswrapper[4982]: I0123 08:20:56.972607 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7cd9887484-nrtvc"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.013827 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7cd9887484-nrtvc"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.049780 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.052716 4982 scope.go:117] "RemoveContainer" containerID="74a7b42a19564dc748ab37edddb7525e4b1a173e9cc8b1eb1fb22abc814887a5" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.059431 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-7bf8bcfb96-bjw9x"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.066252 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6959c78bf5-trn2r"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.073910 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6959c78bf5-trn2r"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.075962 4982 scope.go:117] "RemoveContainer" containerID="6911d6b8620b29ec1403d6622d6d55329a3f11deb7024248a62fe0ecde175278" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.083413 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.092477 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.100877 4982 scope.go:117] "RemoveContainer" containerID="bd096c9a64adc5e24222bd520105f18c192474be440d5f8370a222b01b9f8c41" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.101191 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.108108 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.124189 4982 scope.go:117] "RemoveContainer" containerID="db343ad642c9cc837f707a4e56a8a8c708cc25fec21d7f114d380fb4ad3f1acf" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.838168 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" path="/var/lib/kubelet/pods/0b4b8957-4a36-4d8b-a591-fdcbe696c808/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.839179 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" path="/var/lib/kubelet/pods/16e7b450-ab50-4a27-ae68-450c34eb2cba/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.840555 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="271fae4a-bd02-4281-a588-2784769ed552" path="/var/lib/kubelet/pods/271fae4a-bd02-4281-a588-2784769ed552/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.841184 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bdeef6c-2103-4eee-8728-81d9fb1cf3a1" 
path="/var/lib/kubelet/pods/9bdeef6c-2103-4eee-8728-81d9fb1cf3a1/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.841603 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b969cd53-c58b-4a97-89b8-8581874e1336" path="/var/lib/kubelet/pods/b969cd53-c58b-4a97-89b8-8581874e1336/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.842206 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" path="/var/lib/kubelet/pods/be86b806-f778-4826-8cfa-c29d366c9af5/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.843345 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" path="/var/lib/kubelet/pods/c49f67d9-be69-448a-9f50-cf2801ed1528/volumes" Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.978306 4982 generic.go:334] "Generic (PLEG): container finished" podID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerID="af380f9803b5a0cc4d7580e7e89b720729e3c7d4ba623f631701258b3aa79272" exitCode=0 Jan 23 08:20:57 crc kubenswrapper[4982]: I0123 08:20:57.978383 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c478fb9-56vcd" event={"ID":"5448f946-ae45-4ef2-84fb-d848a335c81e","Type":"ContainerDied","Data":"af380f9803b5a0cc4d7580e7e89b720729e3c7d4ba623f631701258b3aa79272"} Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.209309 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330551 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgrmn\" (UniqueName: \"kubernetes.io/projected/5448f946-ae45-4ef2-84fb-d848a335c81e-kube-api-access-mgrmn\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330619 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-ovndb-tls-certs\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330750 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-config\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330793 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-combined-ca-bundle\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330843 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-httpd-config\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330874 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-internal-tls-certs\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.330894 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-public-tls-certs\") pod \"5448f946-ae45-4ef2-84fb-d848a335c81e\" (UID: \"5448f946-ae45-4ef2-84fb-d848a335c81e\") " Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.337910 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.352486 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5448f946-ae45-4ef2-84fb-d848a335c81e-kube-api-access-mgrmn" (OuterVolumeSpecName: "kube-api-access-mgrmn") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). InnerVolumeSpecName "kube-api-access-mgrmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.380458 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-config" (OuterVolumeSpecName: "config") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.381998 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.382575 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.397476 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.415659 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5448f946-ae45-4ef2-84fb-d848a335c81e" (UID: "5448f946-ae45-4ef2-84fb-d848a335c81e"). 
InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.433420 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.433658 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.434258 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.434336 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.434403 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.434479 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgrmn\" (UniqueName: \"kubernetes.io/projected/5448f946-ae45-4ef2-84fb-d848a335c81e-kube-api-access-mgrmn\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.434548 4982 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5448f946-ae45-4ef2-84fb-d848a335c81e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.993078 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f7c478fb9-56vcd" event={"ID":"5448f946-ae45-4ef2-84fb-d848a335c81e","Type":"ContainerDied","Data":"3275681834a75fe5a667e386f502dc713649ea9cee55afc920468e7742a548b6"} Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.993381 4982 scope.go:117] "RemoveContainer" containerID="447acfd4c048d55e1f08058b4c91272ee9f4e8985a1b8beb44df17770342a816" Jan 23 08:20:58 crc kubenswrapper[4982]: I0123 08:20:58.993490 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f7c478fb9-56vcd" Jan 23 08:20:59 crc kubenswrapper[4982]: I0123 08:20:59.026914 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f7c478fb9-56vcd"] Jan 23 08:20:59 crc kubenswrapper[4982]: I0123 08:20:59.028370 4982 scope.go:117] "RemoveContainer" containerID="af380f9803b5a0cc4d7580e7e89b720729e3c7d4ba623f631701258b3aa79272" Jan 23 08:20:59 crc kubenswrapper[4982]: I0123 08:20:59.034403 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f7c478fb9-56vcd"] Jan 23 08:20:59 crc kubenswrapper[4982]: I0123 08:20:59.841392 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" path="/var/lib/kubelet/pods/5448f946-ae45-4ef2-84fb-d848a335c81e/volumes" Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.912856 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.913849 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.914127 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.914174 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.914205 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.916355 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 
08:21:01.917447 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:01 crc kubenswrapper[4982]: E0123 08:21:01.917484 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.912059 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.913041 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.913703 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.913667 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.913786 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.915308 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.917454 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:06 crc kubenswrapper[4982]: E0123 08:21:06.917520 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:21:11 crc kubenswrapper[4982]: I0123 08:21:11.435724 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:21:11 crc kubenswrapper[4982]: I0123 08:21:11.436511 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.912842 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.913271 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.913613 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.913755 4982 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.914952 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.916845 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.918424 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 23 08:21:11 crc kubenswrapper[4982]: E0123 08:21:11.918470 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-6ksf4" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.132952 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6ksf4_3e0e89b0-a9ad-4104-a32e-0cc150d24c60/ovs-vswitchd/0.log" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.136135 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.187977 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6ksf4_3e0e89b0-a9ad-4104-a32e-0cc150d24c60/ovs-vswitchd/0.log" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.189429 4982 generic.go:334] "Generic (PLEG): container finished" podID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" exitCode=137 Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.189478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerDied","Data":"3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713"} Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.189509 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6ksf4" event={"ID":"3e0e89b0-a9ad-4104-a32e-0cc150d24c60","Type":"ContainerDied","Data":"f3386b3e60a8fe6d15a9130629fb69a8a35ab4085c054af6bee7be2a7402fd54"} Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.189535 4982 scope.go:117] "RemoveContainer" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.189737 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-6ksf4" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.216316 4982 scope.go:117] "RemoveContainer" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.240312 4982 scope.go:117] "RemoveContainer" containerID="876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.248804 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-log\") pod \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.248855 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-scripts\") pod \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.248878 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-log" (OuterVolumeSpecName: "var-log") pod "3e0e89b0-a9ad-4104-a32e-0cc150d24c60" (UID: "3e0e89b0-a9ad-4104-a32e-0cc150d24c60"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.248947 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-etc-ovs\") pod \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.248967 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-lib\") pod \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.248977 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "3e0e89b0-a9ad-4104-a32e-0cc150d24c60" (UID: "3e0e89b0-a9ad-4104-a32e-0cc150d24c60"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249029 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27w4t\" (UniqueName: \"kubernetes.io/projected/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-kube-api-access-27w4t\") pod \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249089 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-run\") pod \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\" (UID: \"3e0e89b0-a9ad-4104-a32e-0cc150d24c60\") " Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249110 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-lib" (OuterVolumeSpecName: "var-lib") pod "3e0e89b0-a9ad-4104-a32e-0cc150d24c60" (UID: "3e0e89b0-a9ad-4104-a32e-0cc150d24c60"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249242 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-run" (OuterVolumeSpecName: "var-run") pod "3e0e89b0-a9ad-4104-a32e-0cc150d24c60" (UID: "3e0e89b0-a9ad-4104-a32e-0cc150d24c60"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249480 4982 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249500 4982 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249511 4982 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-lib\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.249521 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.250056 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-scripts" (OuterVolumeSpecName: "scripts") pod "3e0e89b0-a9ad-4104-a32e-0cc150d24c60" (UID: "3e0e89b0-a9ad-4104-a32e-0cc150d24c60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.254272 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-kube-api-access-27w4t" (OuterVolumeSpecName: "kube-api-access-27w4t") pod "3e0e89b0-a9ad-4104-a32e-0cc150d24c60" (UID: "3e0e89b0-a9ad-4104-a32e-0cc150d24c60"). InnerVolumeSpecName "kube-api-access-27w4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.277310 4982 scope.go:117] "RemoveContainer" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" Jan 23 08:21:16 crc kubenswrapper[4982]: E0123 08:21:16.277879 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713\": container with ID starting with 3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713 not found: ID does not exist" containerID="3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.277924 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713"} err="failed to get container status \"3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713\": rpc error: code = NotFound desc = could not find container \"3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713\": container with ID starting with 3003c2c0d205cbb4c72b2ed43568d40f154f899d938c830ffff69e1a4f79d713 not found: ID does not exist" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.277952 4982 scope.go:117] "RemoveContainer" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" Jan 23 08:21:16 crc kubenswrapper[4982]: E0123 08:21:16.278308 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563\": container with ID starting with 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 not found: ID does not exist" containerID="52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.278328 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563"} err="failed to get container status \"52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563\": rpc error: code = NotFound desc = could not find container \"52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563\": container with ID starting with 52e619fe4760cc6c541656e3b0959c3e1648532e644c87fa413e21d8dca51563 not found: ID does not exist" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.278341 4982 scope.go:117] "RemoveContainer" containerID="876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855" Jan 23 08:21:16 crc kubenswrapper[4982]: E0123 08:21:16.278950 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855\": container with ID starting with 876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855 not found: ID does not exist" containerID="876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.279002 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855"} err="failed to get container status \"876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855\": rpc error: code = NotFound desc = could not 
find container \"876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855\": container with ID starting with 876643e606475321f3bb8787865616e96af60b7ff98db8467183815d25a45855 not found: ID does not exist" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.351670 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.351733 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27w4t\" (UniqueName: \"kubernetes.io/projected/3e0e89b0-a9ad-4104-a32e-0cc150d24c60-kube-api-access-27w4t\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.527986 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-6ksf4"] Jan 23 08:21:16 crc kubenswrapper[4982]: I0123 08:21:16.535879 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-6ksf4"] Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.685198 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.838385 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" path="/var/lib/kubelet/pods/3e0e89b0-a9ad-4104-a32e-0cc150d24c60/volumes" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.876989 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-lock\") pod \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.877055 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.877091 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") pod \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.877215 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-combined-ca-bundle\") pod \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.877235 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-cache\") pod \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.877295 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmxpc\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-kube-api-access-mmxpc\") pod \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\" (UID: \"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1\") " 
Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.877971 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-lock" (OuterVolumeSpecName: "lock") pod "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.878276 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-cache" (OuterVolumeSpecName: "cache") pod "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.882652 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.882676 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.883017 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-kube-api-access-mmxpc" (OuterVolumeSpecName: "kube-api-access-mmxpc") pod "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1"). InnerVolumeSpecName "kube-api-access-mmxpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.979660 4982 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-lock\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.979949 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.980039 4982 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.980143 4982 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-cache\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:17 crc kubenswrapper[4982]: I0123 08:21:17.980232 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmxpc\" (UniqueName: \"kubernetes.io/projected/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-kube-api-access-mmxpc\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.000388 4982 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.082354 4982 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.218381 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" (UID: "5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.225032 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerID="54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1" exitCode=137 Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.225094 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1"} Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.225133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1","Type":"ContainerDied","Data":"de9202c698b3916fb1d2512866e4d9733518738e6f27d5f5fa8a3887d9d5db6f"} Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.225156 4982 scope.go:117] "RemoveContainer" containerID="54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.225355 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.283325 4982 scope.go:117] "RemoveContainer" containerID="79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.299379 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.306902 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.317046 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.331274 4982 scope.go:117] "RemoveContainer" containerID="de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.360005 4982 scope.go:117] "RemoveContainer" containerID="0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.384517 4982 scope.go:117] "RemoveContainer" containerID="d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.406307 4982 scope.go:117] "RemoveContainer" containerID="f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.432038 4982 scope.go:117] "RemoveContainer" containerID="674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.456172 4982 scope.go:117] "RemoveContainer" containerID="4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.478360 4982 scope.go:117] "RemoveContainer" containerID="2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.502149 4982 scope.go:117] "RemoveContainer" containerID="7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.528200 4982 scope.go:117] "RemoveContainer" containerID="3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.553270 4982 scope.go:117] "RemoveContainer" containerID="49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.591512 4982 scope.go:117] "RemoveContainer" containerID="16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.620138 4982 scope.go:117] "RemoveContainer" containerID="18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.643119 4982 scope.go:117] "RemoveContainer" containerID="54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.664965 4982 scope.go:117] "RemoveContainer" containerID="54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.670306 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1\": container with ID starting with 54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1 not found: ID does not exist" containerID="54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.670385 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1"} err="failed to get container status \"54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1\": rpc error: code = NotFound desc = could not find container \"54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1\": container with ID starting with 54f1e9b014403e7dccd44b188ea7111e6239983d75a3ed601b4bbb26709679b1 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.670443 4982 scope.go:117] "RemoveContainer" containerID="79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.671076 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564\": container with ID starting with 79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564 not found: ID does not exist" containerID="79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.671124 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564"} err="failed to get container status \"79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564\": rpc error: code = NotFound desc = could not find container \"79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564\": container with ID starting with 79a81a28186bc3a26ea8d019aa3acd54c1f78e836558ad9302b4328876f4e564 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.671159 4982 scope.go:117] "RemoveContainer" containerID="de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.671480 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973\": container with ID starting with de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973 not found: ID does not exist" containerID="de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.671499 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973"} err="failed to get container status \"de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973\": rpc error: code = NotFound desc = could not find container \"de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973\": container with ID starting with de44d1af71cb4acbf2718da1188bf90677423f206aa2a96c8a371c1f443e4973 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.671514 4982 scope.go:117] "RemoveContainer" containerID="0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653" Jan 23 08:21:18 crc 
kubenswrapper[4982]: E0123 08:21:18.671988 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653\": container with ID starting with 0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653 not found: ID does not exist" containerID="0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.672003 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653"} err="failed to get container status \"0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653\": rpc error: code = NotFound desc = could not find container \"0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653\": container with ID starting with 0a827beb6fa1458f5fea18a1af92096bb618d8ef7a664affd6c27ed5edbef653 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.672016 4982 scope.go:117] "RemoveContainer" containerID="d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.672524 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86\": container with ID starting with d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86 not found: ID does not exist" containerID="d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.672553 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86"} err="failed to get container status \"d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86\": rpc error: code = NotFound desc = could not find container \"d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86\": container with ID starting with d513e6f0db9cd6c45d132098b49437b69cca263acb0e7776120945236bf5af86 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.672571 4982 scope.go:117] "RemoveContainer" containerID="f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.672922 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1\": container with ID starting with f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1 not found: ID does not exist" containerID="f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.672957 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1"} err="failed to get container status \"f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1\": rpc error: code = NotFound desc = could not find container \"f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1\": container with ID starting with f18be04275eaf9c29a587d156d72f4c02ce19120fdb343067a05af4a948e0dd1 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: 
I0123 08:21:18.672974 4982 scope.go:117] "RemoveContainer" containerID="674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.673396 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7\": container with ID starting with 674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7 not found: ID does not exist" containerID="674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.673417 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7"} err="failed to get container status \"674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7\": rpc error: code = NotFound desc = could not find container \"674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7\": container with ID starting with 674c888f17cc2afcdc3f840e89e8294d77d6c84642716e160303cb7c2135c9c7 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.673431 4982 scope.go:117] "RemoveContainer" containerID="4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.673693 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2\": container with ID starting with 4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2 not found: ID does not exist" containerID="4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.673715 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2"} err="failed to get container status \"4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2\": rpc error: code = NotFound desc = could not find container \"4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2\": container with ID starting with 4d8f652177b53d68fc2fea452d63f0525e955055edacae0328cf2c58663e51b2 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.673737 4982 scope.go:117] "RemoveContainer" containerID="2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.674303 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505\": container with ID starting with 2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505 not found: ID does not exist" containerID="2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.674330 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505"} err="failed to get container status \"2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505\": rpc error: code = NotFound desc = could not find container \"2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505\": container 
with ID starting with 2e662227e298f8f93abd9b4bf5c867d1066ce32ead965b9e7bb750d906c87505 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.674348 4982 scope.go:117] "RemoveContainer" containerID="7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.675840 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05\": container with ID starting with 7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05 not found: ID does not exist" containerID="7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.675869 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05"} err="failed to get container status \"7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05\": rpc error: code = NotFound desc = could not find container \"7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05\": container with ID starting with 7ef781e7e6272eda7f5ccead8b8a12aa0a3197f7bf5a5edd1a3ac886e59d3c05 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.675890 4982 scope.go:117] "RemoveContainer" containerID="3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.676215 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336\": container with ID starting with 3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336 not found: ID does not exist" containerID="3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.676245 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336"} err="failed to get container status \"3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336\": rpc error: code = NotFound desc = could not find container \"3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336\": container with ID starting with 3035f5a90022800bdb8e8e05064b8af2d294337b8a45a64b7a6ea384b18c2336 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.676263 4982 scope.go:117] "RemoveContainer" containerID="49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.676527 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e\": container with ID starting with 49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e not found: ID does not exist" containerID="49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.676547 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e"} err="failed to get container status 
\"49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e\": rpc error: code = NotFound desc = could not find container \"49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e\": container with ID starting with 49799742e08f8dd56b818e009d3ad53d7f3d59457d34f0b186094e3e851a015e not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.676564 4982 scope.go:117] "RemoveContainer" containerID="16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.676810 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440\": container with ID starting with 16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440 not found: ID does not exist" containerID="16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.676834 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440"} err="failed to get container status \"16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440\": rpc error: code = NotFound desc = could not find container \"16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440\": container with ID starting with 16c78f2e829e4386358b2ee1965f6da0cd2056a464cd7e3bd2c7d90d5d63a440 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.676850 4982 scope.go:117] "RemoveContainer" containerID="18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.677153 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290\": container with ID starting with 18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290 not found: ID does not exist" containerID="18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.677177 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290"} err="failed to get container status \"18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290\": rpc error: code = NotFound desc = could not find container \"18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290\": container with ID starting with 18ad6b7e3a9a80353cfdaf7099297cf9a359b2c47e2cdb8e2f68c8b51c11f290 not found: ID does not exist" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.677194 4982 scope.go:117] "RemoveContainer" containerID="54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed" Jan 23 08:21:18 crc kubenswrapper[4982]: E0123 08:21:18.677451 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed\": container with ID starting with 54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed not found: ID does not exist" containerID="54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed" Jan 23 08:21:18 crc kubenswrapper[4982]: I0123 08:21:18.677472 4982 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed"} err="failed to get container status \"54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed\": rpc error: code = NotFound desc = could not find container \"54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed\": container with ID starting with 54e4764e604c4c9c4d997a9236cd3a18159fec15f3f2f79215e7aab36d3cf6ed not found: ID does not exist" Jan 23 08:21:19 crc kubenswrapper[4982]: I0123 08:21:19.836795 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" path="/var/lib/kubelet/pods/5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1/volumes" Jan 23 08:21:25 crc kubenswrapper[4982]: I0123 08:21:25.914881 4982 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9e81c71a-ceaf-4e4a-b875-da5f9abcdd06"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9e81c71a-ceaf-4e4a-b875-da5f9abcdd06] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9e81c71a_ceaf_4e4a_b875_da5f9abcdd06.slice" Jan 23 08:21:25 crc kubenswrapper[4982]: E0123 08:21:25.915562 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod9e81c71a-ceaf-4e4a-b875-da5f9abcdd06] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod9e81c71a-ceaf-4e4a-b875-da5f9abcdd06] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9e81c71a_ceaf_4e4a_b875_da5f9abcdd06.slice" pod="openstack/ovn-northd-0" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" Jan 23 08:21:26 crc kubenswrapper[4982]: I0123 08:21:26.306306 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 08:21:26 crc kubenswrapper[4982]: I0123 08:21:26.329490 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 08:21:26 crc kubenswrapper[4982]: I0123 08:21:26.337071 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 08:21:27 crc kubenswrapper[4982]: I0123 08:21:27.845116 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" path="/var/lib/kubelet/pods/9e81c71a-ceaf-4e4a-b875-da5f9abcdd06/volumes" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:36.998521 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f45hs"] Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.000446 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.000495 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.000520 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.000529 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.000574 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.000584 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.000598 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="proxy-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.000606 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="proxy-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001377 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001394 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001403 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001411 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-api" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001427 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001436 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api" Jan 23 
08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001447 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001453 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001466 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" containerName="memcached" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001472 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" containerName="memcached" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001483 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001489 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001499 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="sg-core" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001505 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="sg-core" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001514 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001521 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-api" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001535 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001543 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001555 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="cinder-scheduler" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001561 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="cinder-scheduler" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001570 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001576 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-server" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001586 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001592 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-replicator" Jan 23 08:21:37 crc 
kubenswrapper[4982]: E0123 08:21:37.001601 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-metadata" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001607 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-metadata" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001615 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="probe" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001620 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="probe" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001653 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="setup-container" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001661 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="setup-container" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001671 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001676 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001688 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" containerName="nova-scheduler-scheduler" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001694 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" containerName="nova-scheduler-scheduler" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001701 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001707 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001716 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b969cd53-c58b-4a97-89b8-8581874e1336" containerName="keystone-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001723 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b969cd53-c58b-4a97-89b8-8581874e1336" containerName="keystone-api" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001737 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="setup-container" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001745 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="setup-container" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001757 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4741be-bf98-457a-9275-45405ac3f257" containerName="galera" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001763 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4741be-bf98-457a-9275-45405ac3f257" 
containerName="galera" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001771 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="rsync" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001777 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="rsync" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001783 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-reaper" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001790 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-reaper" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001800 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="rabbitmq" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001828 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="rabbitmq" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001837 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-updater" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001844 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-updater" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001852 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001858 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001870 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001877 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-server" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001885 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="swift-recon-cron" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001891 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="swift-recon-cron" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001901 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001906 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001914 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4741be-bf98-457a-9275-45405ac3f257" containerName="mysql-bootstrap" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001920 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4741be-bf98-457a-9275-45405ac3f257" 
containerName="mysql-bootstrap" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001927 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001932 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001943 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-expirer" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001948 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-expirer" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001959 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="rabbitmq" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001964 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="rabbitmq" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001975 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-central-agent" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001981 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-central-agent" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.001993 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.001998 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002007 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271fae4a-bd02-4281-a588-2784769ed552" containerName="nova-cell0-conductor-conductor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002013 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="271fae4a-bd02-4281-a588-2784769ed552" containerName="nova-cell0-conductor-conductor" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002020 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002025 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002032 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002038 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002045 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002051 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-api" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002063 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002072 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002084 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="openstack-network-exporter" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002090 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="openstack-network-exporter" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002097 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" containerName="nova-cell1-conductor-conductor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002103 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" containerName="nova-cell1-conductor-conductor" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002113 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002118 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002179 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002186 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-server" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002195 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002200 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002225 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002232 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002246 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002253 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002264 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server-init" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 
08:21:37.002273 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server-init" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002280 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002285 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002294 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-notification-agent" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002300 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-notification-agent" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002310 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938a15bf-06b9-40ed-89ac-ce3da6052837" containerName="kube-state-metrics" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002316 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="938a15bf-06b9-40ed-89ac-ce3da6052837" containerName="kube-state-metrics" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002325 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002332 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002341 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="ovn-northd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002346 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="ovn-northd" Jan 23 08:21:37 crc kubenswrapper[4982]: E0123 08:21:37.002355 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-updater" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002362 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-updater" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002583 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002599 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovsdb-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002608 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002651 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="sg-core" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002667 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" 
containerName="openstack-network-exporter" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002678 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002690 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0e89b0-a9ad-4104-a32e-0cc150d24c60" containerName="ovs-vswitchd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002698 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002708 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-central-agent" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002722 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f6dbc6-8aa7-49b3-9d7d-6fb5fdb78d16" containerName="memcached" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002730 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002737 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b4b8957-4a36-4d8b-a591-fdcbe696c808" containerName="rabbitmq" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002746 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002755 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002764 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-updater" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002770 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002776 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b969cd53-c58b-4a97-89b8-8581874e1336" containerName="keystone-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002785 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d23dde-8aac-481b-bfdb-ce3315166bc4" containerName="rabbitmq" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002791 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8b0b53-6555-429b-a4d9-b0cb0b1e04cb" containerName="cinder-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002798 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e81c71a-ceaf-4e4a-b875-da5f9abcdd06" containerName="ovn-northd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002817 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002824 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad26b54-da5c-40d5-a715-b47a4f59b4a7" containerName="nova-cell1-conductor-conductor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002830 4982 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002840 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002850 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4675771-9ce7-4bd7-a992-4358cbfdc2a0" containerName="nova-scheduler-scheduler" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002858 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002870 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="271fae4a-bd02-4281-a588-2784769ed552" containerName="nova-cell0-conductor-conductor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002879 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="ceilometer-notification-agent" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002886 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002893 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-expirer" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002903 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="probe" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002910 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002921 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002929 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="be86b806-f778-4826-8cfa-c29d366c9af5" containerName="barbican-keystone-listener-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002936 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002945 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e7b450-ab50-4a27-ae68-450c34eb2cba" containerName="proxy-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002952 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d81a8b1-9b52-44df-bc0b-7852d3b8570c" containerName="cinder-scheduler" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002959 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4741be-bf98-457a-9275-45405ac3f257" containerName="galera" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002967 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-updater" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002974 4982 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="938a15bf-06b9-40ed-89ac-ce3da6052837" containerName="kube-state-metrics" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002981 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5dfad61-3f03-4cb1-b9b6-6f842bf4dd1b" containerName="placement-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002990 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49f67d9-be69-448a-9f50-cf2801ed1528" containerName="barbican-worker-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.002997 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="497bf205-5985-411f-960e-2364563ae9e1" containerName="glance-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003004 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-server" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003013 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="10c2a3b1-f4b9-4388-9261-fa703cfaf757" containerName="nova-metadata-metadata" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003021 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5448f946-ae45-4ef2-84fb-d848a335c81e" containerName="neutron-httpd" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003031 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="rsync" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003036 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="container-replicator" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003045 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b7b481-7320-4d03-a230-a173c66dae36" containerName="barbican-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003052 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="object-auditor" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003061 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="account-reaper" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003070 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a96cf7-6cad-496c-9aa6-523fce7af3ff" containerName="glance-log" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003075 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1bed17-dcb3-462e-8227-272c4c1fbc16" containerName="nova-api-api" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.003082 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1d3fcb-23d2-4312-8ba0-3c25cf3fb1d1" containerName="swift-recon-cron" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.004359 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.032215 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f45hs"] Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.108277 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-utilities\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.108357 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gc5n\" (UniqueName: \"kubernetes.io/projected/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-kube-api-access-9gc5n\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.109041 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-catalog-content\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.211228 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-catalog-content\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.211655 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-utilities\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.211827 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gc5n\" (UniqueName: \"kubernetes.io/projected/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-kube-api-access-9gc5n\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.211850 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-catalog-content\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.212064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-utilities\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.236394 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9gc5n\" (UniqueName: \"kubernetes.io/projected/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-kube-api-access-9gc5n\") pod \"community-operators-f45hs\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.333232 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.927314 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f45hs"] Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.932750 4982 scope.go:117] "RemoveContainer" containerID="857021a4975f214c420aba4c2fef9f900f3e0c5a5cc647383b732a391cb0991e" Jan 23 08:21:37 crc kubenswrapper[4982]: I0123 08:21:37.995107 4982 scope.go:117] "RemoveContainer" containerID="24b2d9e669bda05086462abeca72d8b1c195e586203509c1f6a637c2037dd062" Jan 23 08:21:38 crc kubenswrapper[4982]: I0123 08:21:38.020443 4982 scope.go:117] "RemoveContainer" containerID="d8d5274a44b413a7337658e8c8a1cdb93721aa9650720a829df365863e2285cd" Jan 23 08:21:38 crc kubenswrapper[4982]: I0123 08:21:38.044783 4982 scope.go:117] "RemoveContainer" containerID="881b906ef1be3ae613692e976ccc55331837c84c9820bb020dc06fe11f437493" Jan 23 08:21:38 crc kubenswrapper[4982]: I0123 08:21:38.416614 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerStarted","Data":"e7f41ca32c1384e6ea99df3b95e4c27e620d937f640e06024dec0501c5775046"} Jan 23 08:21:39 crc kubenswrapper[4982]: I0123 08:21:39.442142 4982 generic.go:334] "Generic (PLEG): container finished" podID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerID="712b866df06b2c7ab5fa5a63f51bc82cc8ee943cb6b14c5abc8507f84aa82bcf" exitCode=0 Jan 23 08:21:39 crc kubenswrapper[4982]: I0123 08:21:39.442220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerDied","Data":"712b866df06b2c7ab5fa5a63f51bc82cc8ee943cb6b14c5abc8507f84aa82bcf"} Jan 23 08:21:41 crc kubenswrapper[4982]: I0123 08:21:41.435655 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:21:41 crc kubenswrapper[4982]: I0123 08:21:41.436272 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:21:41 crc kubenswrapper[4982]: I0123 08:21:41.467544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerStarted","Data":"8919b742c36b08c229b4a4c78a38b937a207361102885c465ffdca232bb9dd69"} Jan 23 08:21:42 crc kubenswrapper[4982]: I0123 08:21:42.482754 4982 generic.go:334] "Generic (PLEG): container finished" podID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" 
containerID="8919b742c36b08c229b4a4c78a38b937a207361102885c465ffdca232bb9dd69" exitCode=0 Jan 23 08:21:42 crc kubenswrapper[4982]: I0123 08:21:42.482851 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerDied","Data":"8919b742c36b08c229b4a4c78a38b937a207361102885c465ffdca232bb9dd69"} Jan 23 08:21:44 crc kubenswrapper[4982]: I0123 08:21:44.503235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerStarted","Data":"7ee7426c9f537f200def0a816568fccb9e367ad6081f4df8e76a27702742eebc"} Jan 23 08:21:44 crc kubenswrapper[4982]: I0123 08:21:44.528378 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f45hs" podStartSLOduration=4.628514887 podStartE2EDuration="8.528356666s" podCreationTimestamp="2026-01-23 08:21:36 +0000 UTC" firstStartedPulling="2026-01-23 08:21:39.443542504 +0000 UTC m=+1573.923905890" lastFinishedPulling="2026-01-23 08:21:43.343384283 +0000 UTC m=+1577.823747669" observedRunningTime="2026-01-23 08:21:44.520836105 +0000 UTC m=+1579.001199491" watchObservedRunningTime="2026-01-23 08:21:44.528356666 +0000 UTC m=+1579.008720052" Jan 23 08:21:47 crc kubenswrapper[4982]: I0123 08:21:47.333395 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:47 crc kubenswrapper[4982]: I0123 08:21:47.333826 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:47 crc kubenswrapper[4982]: I0123 08:21:47.381749 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:48 crc kubenswrapper[4982]: I0123 08:21:48.580058 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:48 crc kubenswrapper[4982]: I0123 08:21:48.630691 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f45hs"] Jan 23 08:21:50 crc kubenswrapper[4982]: I0123 08:21:50.552559 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f45hs" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="registry-server" containerID="cri-o://7ee7426c9f537f200def0a816568fccb9e367ad6081f4df8e76a27702742eebc" gracePeriod=2 Jan 23 08:21:52 crc kubenswrapper[4982]: I0123 08:21:52.573573 4982 generic.go:334] "Generic (PLEG): container finished" podID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerID="7ee7426c9f537f200def0a816568fccb9e367ad6081f4df8e76a27702742eebc" exitCode=0 Jan 23 08:21:52 crc kubenswrapper[4982]: I0123 08:21:52.573652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerDied","Data":"7ee7426c9f537f200def0a816568fccb9e367ad6081f4df8e76a27702742eebc"} Jan 23 08:21:52 crc kubenswrapper[4982]: I0123 08:21:52.959439 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.070911 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-utilities\") pod \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.071051 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-catalog-content\") pod \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.071098 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gc5n\" (UniqueName: \"kubernetes.io/projected/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-kube-api-access-9gc5n\") pod \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\" (UID: \"e58ba121-cdee-48e3-b21a-fff2b3dcafbb\") " Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.086077 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-kube-api-access-9gc5n" (OuterVolumeSpecName: "kube-api-access-9gc5n") pod "e58ba121-cdee-48e3-b21a-fff2b3dcafbb" (UID: "e58ba121-cdee-48e3-b21a-fff2b3dcafbb"). InnerVolumeSpecName "kube-api-access-9gc5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.086674 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-utilities" (OuterVolumeSpecName: "utilities") pod "e58ba121-cdee-48e3-b21a-fff2b3dcafbb" (UID: "e58ba121-cdee-48e3-b21a-fff2b3dcafbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.136467 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e58ba121-cdee-48e3-b21a-fff2b3dcafbb" (UID: "e58ba121-cdee-48e3-b21a-fff2b3dcafbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.173277 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.173313 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.173327 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gc5n\" (UniqueName: \"kubernetes.io/projected/e58ba121-cdee-48e3-b21a-fff2b3dcafbb-kube-api-access-9gc5n\") on node \"crc\" DevicePath \"\"" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.591544 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f45hs" event={"ID":"e58ba121-cdee-48e3-b21a-fff2b3dcafbb","Type":"ContainerDied","Data":"e7f41ca32c1384e6ea99df3b95e4c27e620d937f640e06024dec0501c5775046"} Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.591604 4982 scope.go:117] "RemoveContainer" containerID="7ee7426c9f537f200def0a816568fccb9e367ad6081f4df8e76a27702742eebc" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.591599 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f45hs" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.625763 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f45hs"] Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.626769 4982 scope.go:117] "RemoveContainer" containerID="8919b742c36b08c229b4a4c78a38b937a207361102885c465ffdca232bb9dd69" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.631658 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f45hs"] Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.649420 4982 scope.go:117] "RemoveContainer" containerID="712b866df06b2c7ab5fa5a63f51bc82cc8ee943cb6b14c5abc8507f84aa82bcf" Jan 23 08:21:53 crc kubenswrapper[4982]: I0123 08:21:53.838113 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" path="/var/lib/kubelet/pods/e58ba121-cdee-48e3-b21a-fff2b3dcafbb/volumes" Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.435439 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.435955 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.435992 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.436612 4982 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.436673 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107" gracePeriod=600 Jan 23 08:22:11 crc kubenswrapper[4982]: E0123 08:22:11.567125 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.767604 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107" exitCode=0 Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.767680 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"} Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.767783 4982 scope.go:117] "RemoveContainer" containerID="4ff1d104ed341faafc779f69e2fa548cf764b2dac0f08d4eb5491e25e74751a0" Jan 23 08:22:11 crc kubenswrapper[4982]: I0123 08:22:11.768328 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107" Jan 23 08:22:11 crc kubenswrapper[4982]: E0123 08:22:11.768758 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:22:24 crc kubenswrapper[4982]: I0123 08:22:24.828534 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107" Jan 23 08:22:24 crc kubenswrapper[4982]: E0123 08:22:24.829449 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:22:37 crc kubenswrapper[4982]: I0123 08:22:37.827223 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107" Jan 23 
08:22:37 crc kubenswrapper[4982]: E0123 08:22:37.828083 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.003942 4982 scope.go:117] "RemoveContainer" containerID="fc5aed5043540bc597a43450b8e378498c0ebb4cf006a6e80b6393e90b942401"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.053360 4982 scope.go:117] "RemoveContainer" containerID="92cc8670b02207b8d76731a04ece2d09171e94fc63450601eeb112b6ff6b02d2"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.095640 4982 scope.go:117] "RemoveContainer" containerID="1426f70b9f3cd5368570e826cee82762c724ee2dd3f610396a5107e0fe916aea"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.120921 4982 scope.go:117] "RemoveContainer" containerID="35d559689a418b4577df8ebd068ae7c6762997056d9969b9fcfea00766c8a2a0"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.167968 4982 scope.go:117] "RemoveContainer" containerID="64e0007da8498c9c0f6c2a2120c194c4dd9b01a8bed0b734b2071d6d4123d58b"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.193698 4982 scope.go:117] "RemoveContainer" containerID="6440efd302531e40a8f133fa95e19b67d505d0b86e33f518e7d4d7f7fcc9d03d"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.225995 4982 scope.go:117] "RemoveContainer" containerID="4e5ecc1d662fcad4e262e1878c9f735e49eb6a3856bf628eedbc222fa5f0bff6"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.257660 4982 scope.go:117] "RemoveContainer" containerID="676ce9253eff14b373d9ee4a70619b52f039634803a44d0ea713217ce8075542"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.278386 4982 scope.go:117] "RemoveContainer" containerID="6e9f536519bada71797b86bc7eafccf067977908d806bf8ea7d4cbd0aeb4e766"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.313489 4982 scope.go:117] "RemoveContainer" containerID="ab46f0bb82a7f706ae640b6fe177c3d1384928d9dd89e36ffabd395368fe830c"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.340993 4982 scope.go:117] "RemoveContainer" containerID="2005a7393e26c0e16e277225d7aa3a21d05deced5aba1019940b9c9376b287e6"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.371490 4982 scope.go:117] "RemoveContainer" containerID="15e6f505315b1c8001f1aa5174ed3d2b374f4b8baa37a83772a2ff9200f5a655"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.395830 4982 scope.go:117] "RemoveContainer" containerID="2a92e87a0f81ea0565e71c6d4b8c5e7c34d799fb7d29c296c158ddef3d95a041"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.417085 4982 scope.go:117] "RemoveContainer" containerID="1084e86a463c1d4264f9f8b01b115bd2a0f698d97f3fbe21d0cdc83bcf455e68"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.463616 4982 scope.go:117] "RemoveContainer" containerID="29eb3a543d5ebdca36d0fd83d091f9af6c5e24ff9135abc626ab6043bf685fa1"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.505613 4982 scope.go:117] "RemoveContainer" containerID="8e9f68b22806c0d91023027c2f74867c11015aa6e629f98eba701236324fb716"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.552078 4982 scope.go:117] "RemoveContainer" containerID="f883d2cd12e8bbc93ce18594c30be9e5a82eb7cddb1d1c8ec54945077ddf5a84"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.577076 4982 scope.go:117] "RemoveContainer" containerID="b457d4fe1ca2f0858e1e5fa829f17855220a24e0cd7893a3da815b99c94cffb6"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.612743 4982 scope.go:117] "RemoveContainer" containerID="ebb645735179d0c992c61b57c2e7185ac4541a516e9c3b49608bc861dc751bc4"
Jan 23 08:22:39 crc kubenswrapper[4982]: I0123 08:22:39.649752 4982 scope.go:117] "RemoveContainer" containerID="261cba7e7891939fcd41a39d9b55b43d04b45f8f7082417d66f929108ba417c9"
Jan 23 08:22:52 crc kubenswrapper[4982]: I0123 08:22:52.827909 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:22:52 crc kubenswrapper[4982]: E0123 08:22:52.828597 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:23:04 crc kubenswrapper[4982]: I0123 08:23:04.827690 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:23:04 crc kubenswrapper[4982]: E0123 08:23:04.828446 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:23:16 crc kubenswrapper[4982]: I0123 08:23:16.828884 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:23:16 crc kubenswrapper[4982]: E0123 08:23:16.829758 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:23:28 crc kubenswrapper[4982]: I0123 08:23:28.827072 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:23:28 crc kubenswrapper[4982]: E0123 08:23:28.827906 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.155430 4982 scope.go:117] "RemoveContainer" containerID="40014c145259aa5d0d53ce81504b0a95f5a7c5981fc26df4df0e2b27a5cfe1fe"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.186460 4982 scope.go:117] "RemoveContainer" containerID="64abd397b9bbafd314344b7fea5fc608e939576a618d014cf3584a03cc7dc57b"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.222768 4982 scope.go:117] "RemoveContainer" containerID="37ff5c16cf8a0674dc0470f1bef769079110379662beaef3634cb7d4aa2c0729"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.250134 4982 scope.go:117] "RemoveContainer" containerID="d70784a22b80ca41960a42faa1acf918f4a1cd9702bca90db1a12337c2709198"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.325897 4982 scope.go:117] "RemoveContainer" containerID="5f49e46ef3f184cffc0a3f5151239a6c85e934892cf0f9301b74cc75f12bd907"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.343662 4982 scope.go:117] "RemoveContainer" containerID="03db8693dc2242a529ba2dd928929ed07a9fba357317ff119471a64da1e30cda"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.395719 4982 scope.go:117] "RemoveContainer" containerID="61fdd6cb59e3aa39174835f28bd3d7788cf355f682f55a50923e04ba5030d9b0"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.415423 4982 scope.go:117] "RemoveContainer" containerID="4808f5576fa92a9d9ec349f9566ab6d666670b6c27cde7f6f22033657da5a1b8"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.454028 4982 scope.go:117] "RemoveContainer" containerID="9124ee3b4fec4679b2565ead7eeef31d586524dda44fbf7a4b3655c10fedd22f"
Jan 23 08:23:40 crc kubenswrapper[4982]: I0123 08:23:40.475973 4982 scope.go:117] "RemoveContainer" containerID="fa2e7ff99f086c9757ad235e8999d715b4462e740d7cca574f91d905931f959d"
Jan 23 08:23:41 crc kubenswrapper[4982]: I0123 08:23:41.827072 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:23:41 crc kubenswrapper[4982]: E0123 08:23:41.827313 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:23:54 crc kubenswrapper[4982]: I0123 08:23:54.827743 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:23:54 crc kubenswrapper[4982]: E0123 08:23:54.830355 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:24:06 crc kubenswrapper[4982]: I0123 08:24:06.827859 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:24:06 crc kubenswrapper[4982]: E0123 08:24:06.828888 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:24:18 crc kubenswrapper[4982]: I0123 08:24:18.827301 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:24:18 crc kubenswrapper[4982]: E0123 08:24:18.827831 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:24:31 crc kubenswrapper[4982]: I0123 08:24:31.827193 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:24:31 crc kubenswrapper[4982]: E0123 08:24:31.828150 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.612442 4982 scope.go:117] "RemoveContainer" containerID="e457861fdfc37c83462e545101e59ed7ce0c7365f3d95c0c6306521b20072e73"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.636936 4982 scope.go:117] "RemoveContainer" containerID="880314eb8717f37be8615a3ae2dda79843a37da64b680c2fb416a7297f33a946"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.664371 4982 scope.go:117] "RemoveContainer" containerID="359996c1743cb7e90b4d2853dd4efb9d6dfccb7579be55f070340964d012524b"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.689196 4982 scope.go:117] "RemoveContainer" containerID="12f0cebe9233222bfedaa257c3731f1a120a201d5d813c1562bc768ebb34db66"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.737820 4982 scope.go:117] "RemoveContainer" containerID="a551a54cc37ab343d96eabf6ff0b6e35eb542ef58d634189f03450540bbe97a0"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.754689 4982 scope.go:117] "RemoveContainer" containerID="1ef55c9965f862203f61d478c3c3295d2f427277908086d43db10bb9f98dfb4f"
Jan 23 08:24:40 crc kubenswrapper[4982]: I0123 08:24:40.785077 4982 scope.go:117] "RemoveContainer" containerID="00f8d8de5d875cede43c41fa7c25b37bdf2460d54d84168ccc836e2dd4d81ffa"
Jan 23 08:24:44 crc kubenswrapper[4982]: I0123 08:24:44.827306 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:24:44 crc kubenswrapper[4982]: E0123 08:24:44.828305 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:24:56 crc kubenswrapper[4982]: I0123 08:24:56.827758 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:24:56 crc kubenswrapper[4982]: E0123 08:24:56.829037 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:25:08 crc kubenswrapper[4982]: I0123 08:25:08.827287 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:25:08 crc kubenswrapper[4982]: E0123 08:25:08.828100 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:25:22 crc kubenswrapper[4982]: I0123 08:25:22.828123 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:25:22 crc kubenswrapper[4982]: E0123 08:25:22.828962 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:25:36 crc kubenswrapper[4982]: I0123 08:25:36.827225 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:25:36 crc kubenswrapper[4982]: E0123 08:25:36.827883 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:25:40 crc kubenswrapper[4982]: I0123 08:25:40.918186 4982 scope.go:117] "RemoveContainer" containerID="71edc0177ff0d5933505b059608613209efb4dbc155b1725e15071efbadbedeb"
Jan 23 08:25:40 crc kubenswrapper[4982]: I0123 08:25:40.978548 4982 scope.go:117] "RemoveContainer" containerID="6a770ba4faff5c9a5f7042939adde83760f92b60f02a6a606ff4ba387c94c058"
Jan 23 08:25:50 crc kubenswrapper[4982]: I0123 08:25:50.827586 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:25:50 crc kubenswrapper[4982]: E0123 08:25:50.828428 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:26:01 crc kubenswrapper[4982]: I0123 08:26:01.827956 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:26:01 crc kubenswrapper[4982]: E0123 08:26:01.828991 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:26:14 crc kubenswrapper[4982]: I0123 08:26:14.827150 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:26:14 crc kubenswrapper[4982]: E0123 08:26:14.827980 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:26:28 crc kubenswrapper[4982]: I0123 08:26:28.827516 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:26:28 crc kubenswrapper[4982]: E0123 08:26:28.828470 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:26:41 crc kubenswrapper[4982]: I0123 08:26:41.061506 4982 scope.go:117] "RemoveContainer" containerID="60990b5983a5ed6f357cfa2bc9fee6928a439ca6581002d82be773bb5dcd8535"
Jan 23 08:26:41 crc kubenswrapper[4982]: I0123 08:26:41.084084 4982 scope.go:117] "RemoveContainer" containerID="6c840aeda273bccf85f7f5295d0123e2191ab16e2c3ce7f4c6ffeaffac682224"
Jan 23 08:26:41 crc kubenswrapper[4982]: I0123 08:26:41.112227 4982 scope.go:117] "RemoveContainer" containerID="bc2badabfe3be71114926cfc4d259bac2c6e7c4fee0fe0967b92bbf9f88f6e08"
Jan 23 08:26:41 crc kubenswrapper[4982]: I0123 08:26:41.137436 4982 scope.go:117] "RemoveContainer" containerID="bc50b6a7dac2bafbc66b790973c209bbad338f83281ec4c65bb5105ffb7b24d5"
Jan 23 08:26:41 crc kubenswrapper[4982]: I0123 08:26:41.158121 4982 scope.go:117] "RemoveContainer" containerID="929fcb8adb709d932397978e38905f34dd9ae86e421d1c518211189261ab24d5"
Jan 23 08:26:42 crc kubenswrapper[4982]: I0123 08:26:42.827726 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:26:42 crc kubenswrapper[4982]: E0123 08:26:42.827987 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:26:54 crc kubenswrapper[4982]: I0123 08:26:54.827810 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:26:54 crc kubenswrapper[4982]: E0123 08:26:54.828473 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:27:06 crc kubenswrapper[4982]: I0123 08:27:06.827665 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:27:06 crc kubenswrapper[4982]: E0123 08:27:06.828445 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:27:21 crc kubenswrapper[4982]: I0123 08:27:21.827688 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:27:22 crc kubenswrapper[4982]: I0123 08:27:22.534710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"7f74b2e936a3bf06df20c5190605703473a6b63fcc87f6e2b4a4ba23182c04d8"}
Jan 23 08:29:41 crc kubenswrapper[4982]: I0123 08:29:41.436159 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:29:41 crc kubenswrapper[4982]: I0123 08:29:41.436673 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.149245 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"]
Jan 23 08:30:00 crc kubenswrapper[4982]: E0123 08:30:00.152974 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="registry-server"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.153159 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="registry-server"
Jan 23 08:30:00 crc kubenswrapper[4982]: E0123 08:30:00.153242 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="extract-content"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.153353 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="extract-content"
Jan 23 08:30:00 crc kubenswrapper[4982]: E0123 08:30:00.153570 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="extract-utilities"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.153750 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="extract-utilities"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.154192 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e58ba121-cdee-48e3-b21a-fff2b3dcafbb" containerName="registry-server"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.155218 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.157202 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.160408 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.163681 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"]
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.216587 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njk4q\" (UniqueName: \"kubernetes.io/projected/08cfa26e-8192-42fa-8290-86523a48a7ec-kube-api-access-njk4q\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.216834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cfa26e-8192-42fa-8290-86523a48a7ec-secret-volume\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.216933 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cfa26e-8192-42fa-8290-86523a48a7ec-config-volume\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.317968 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njk4q\" (UniqueName: \"kubernetes.io/projected/08cfa26e-8192-42fa-8290-86523a48a7ec-kube-api-access-njk4q\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.318053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cfa26e-8192-42fa-8290-86523a48a7ec-secret-volume\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.318079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cfa26e-8192-42fa-8290-86523a48a7ec-config-volume\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.318938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cfa26e-8192-42fa-8290-86523a48a7ec-config-volume\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.330282 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cfa26e-8192-42fa-8290-86523a48a7ec-secret-volume\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.337869 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njk4q\" (UniqueName: \"kubernetes.io/projected/08cfa26e-8192-42fa-8290-86523a48a7ec-kube-api-access-njk4q\") pod \"collect-profiles-29485950-glxt8\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.485787 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:00 crc kubenswrapper[4982]: I0123 08:30:00.919309 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"]
Jan 23 08:30:01 crc kubenswrapper[4982]: I0123 08:30:01.176567 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8" event={"ID":"08cfa26e-8192-42fa-8290-86523a48a7ec","Type":"ContainerStarted","Data":"0f720a2146a6395810e2fbd6bc26758d4c522cb24a107d7b8f8b356ff3ca6b1b"}
Jan 23 08:30:01 crc kubenswrapper[4982]: I0123 08:30:01.176978 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8" event={"ID":"08cfa26e-8192-42fa-8290-86523a48a7ec","Type":"ContainerStarted","Data":"0ddbc5d466f5101899b34b13b0d8154d362f7d077f7c8b95c1c896076f23a292"}
Jan 23 08:30:01 crc kubenswrapper[4982]: I0123 08:30:01.195031 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8" podStartSLOduration=1.195011663 podStartE2EDuration="1.195011663s" podCreationTimestamp="2026-01-23 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:30:01.188709004 +0000 UTC m=+2075.669072410" watchObservedRunningTime="2026-01-23 08:30:01.195011663 +0000 UTC m=+2075.675375049"
Jan 23 08:30:02 crc kubenswrapper[4982]: I0123 08:30:02.185945 4982 generic.go:334] "Generic (PLEG): container finished" podID="08cfa26e-8192-42fa-8290-86523a48a7ec" containerID="0f720a2146a6395810e2fbd6bc26758d4c522cb24a107d7b8f8b356ff3ca6b1b" exitCode=0
Jan 23 08:30:02 crc kubenswrapper[4982]: I0123 08:30:02.186045 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8" event={"ID":"08cfa26e-8192-42fa-8290-86523a48a7ec","Type":"ContainerDied","Data":"0f720a2146a6395810e2fbd6bc26758d4c522cb24a107d7b8f8b356ff3ca6b1b"}
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.544612 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.681124 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njk4q\" (UniqueName: \"kubernetes.io/projected/08cfa26e-8192-42fa-8290-86523a48a7ec-kube-api-access-njk4q\") pod \"08cfa26e-8192-42fa-8290-86523a48a7ec\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") "
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.681233 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cfa26e-8192-42fa-8290-86523a48a7ec-config-volume\") pod \"08cfa26e-8192-42fa-8290-86523a48a7ec\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") "
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.681286 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cfa26e-8192-42fa-8290-86523a48a7ec-secret-volume\") pod \"08cfa26e-8192-42fa-8290-86523a48a7ec\" (UID: \"08cfa26e-8192-42fa-8290-86523a48a7ec\") "
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.682220 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cfa26e-8192-42fa-8290-86523a48a7ec-config-volume" (OuterVolumeSpecName: "config-volume") pod "08cfa26e-8192-42fa-8290-86523a48a7ec" (UID: "08cfa26e-8192-42fa-8290-86523a48a7ec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.686984 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cfa26e-8192-42fa-8290-86523a48a7ec-kube-api-access-njk4q" (OuterVolumeSpecName: "kube-api-access-njk4q") pod "08cfa26e-8192-42fa-8290-86523a48a7ec" (UID: "08cfa26e-8192-42fa-8290-86523a48a7ec"). InnerVolumeSpecName "kube-api-access-njk4q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.687743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cfa26e-8192-42fa-8290-86523a48a7ec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08cfa26e-8192-42fa-8290-86523a48a7ec" (UID: "08cfa26e-8192-42fa-8290-86523a48a7ec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.782728 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njk4q\" (UniqueName: \"kubernetes.io/projected/08cfa26e-8192-42fa-8290-86523a48a7ec-kube-api-access-njk4q\") on node \"crc\" DevicePath \"\""
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.782792 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08cfa26e-8192-42fa-8290-86523a48a7ec-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 08:30:03 crc kubenswrapper[4982]: I0123 08:30:03.782802 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08cfa26e-8192-42fa-8290-86523a48a7ec-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 08:30:04 crc kubenswrapper[4982]: I0123 08:30:04.205606 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8" event={"ID":"08cfa26e-8192-42fa-8290-86523a48a7ec","Type":"ContainerDied","Data":"0ddbc5d466f5101899b34b13b0d8154d362f7d077f7c8b95c1c896076f23a292"}
Jan 23 08:30:04 crc kubenswrapper[4982]: I0123 08:30:04.205686 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddbc5d466f5101899b34b13b0d8154d362f7d077f7c8b95c1c896076f23a292"
Jan 23 08:30:04 crc kubenswrapper[4982]: I0123 08:30:04.205739 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"
Jan 23 08:30:04 crc kubenswrapper[4982]: I0123 08:30:04.263076 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk"]
Jan 23 08:30:04 crc kubenswrapper[4982]: I0123 08:30:04.270155 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485905-slggk"]
Jan 23 08:30:05 crc kubenswrapper[4982]: I0123 08:30:05.837517 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d7b53f4-cc2b-4572-afee-60441a232155" path="/var/lib/kubelet/pods/3d7b53f4-cc2b-4572-afee-60441a232155/volumes"
Jan 23 08:30:11 crc kubenswrapper[4982]: I0123 08:30:11.436293 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:30:11 crc kubenswrapper[4982]: I0123 08:30:11.437096 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.489677 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrf2d"]
Jan 23 08:30:37 crc kubenswrapper[4982]: E0123 08:30:37.490584 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cfa26e-8192-42fa-8290-86523a48a7ec" containerName="collect-profiles"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.490603 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cfa26e-8192-42fa-8290-86523a48a7ec" containerName="collect-profiles"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.490809 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cfa26e-8192-42fa-8290-86523a48a7ec" containerName="collect-profiles"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.492022 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.502315 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrf2d"]
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.605537 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-utilities\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.605822 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-catalog-content\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.605877 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjfjn\" (UniqueName: \"kubernetes.io/projected/7af0b7a3-4f83-46c7-8947-a5cfb7423600-kube-api-access-zjfjn\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.707879 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-utilities\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.707958 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-catalog-content\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.707983 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjfjn\" (UniqueName: \"kubernetes.io/projected/7af0b7a3-4f83-46c7-8947-a5cfb7423600-kube-api-access-zjfjn\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.708498 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-utilities\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.708512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-catalog-content\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.731963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjfjn\" (UniqueName: \"kubernetes.io/projected/7af0b7a3-4f83-46c7-8947-a5cfb7423600-kube-api-access-zjfjn\") pod \"certified-operators-rrf2d\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") " pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:37 crc kubenswrapper[4982]: I0123 08:30:37.816748 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:38 crc kubenswrapper[4982]: I0123 08:30:38.366268 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrf2d"]
Jan 23 08:30:38 crc kubenswrapper[4982]: I0123 08:30:38.692942 4982 generic.go:334] "Generic (PLEG): container finished" podID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerID="13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7" exitCode=0
Jan 23 08:30:38 crc kubenswrapper[4982]: I0123 08:30:38.692996 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrf2d" event={"ID":"7af0b7a3-4f83-46c7-8947-a5cfb7423600","Type":"ContainerDied","Data":"13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7"}
Jan 23 08:30:38 crc kubenswrapper[4982]: I0123 08:30:38.693279 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrf2d" event={"ID":"7af0b7a3-4f83-46c7-8947-a5cfb7423600","Type":"ContainerStarted","Data":"f526577a839668d81163e4ade2c19da528bbda8bfdb7ea7ac37dd17a02aad7fe"}
Jan 23 08:30:38 crc kubenswrapper[4982]: I0123 08:30:38.694985 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 08:30:40 crc kubenswrapper[4982]: I0123 08:30:40.712871 4982 generic.go:334] "Generic (PLEG): container finished" podID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerID="a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03" exitCode=0
Jan 23 08:30:40 crc kubenswrapper[4982]: I0123 08:30:40.713245 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrf2d" event={"ID":"7af0b7a3-4f83-46c7-8947-a5cfb7423600","Type":"ContainerDied","Data":"a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03"}
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.328739 4982 scope.go:117] "RemoveContainer" containerID="2d26ab11cca23f54664a68d7e65fbe94f1510759c35a3d9cfaa35fbf61a154f8"
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.435689 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.435982 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.436020 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww"
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.436661 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f74b2e936a3bf06df20c5190605703473a6b63fcc87f6e2b4a4ba23182c04d8"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.436715 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://7f74b2e936a3bf06df20c5190605703473a6b63fcc87f6e2b4a4ba23182c04d8" gracePeriod=600
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.724711 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="7f74b2e936a3bf06df20c5190605703473a6b63fcc87f6e2b4a4ba23182c04d8" exitCode=0
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.724780 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"7f74b2e936a3bf06df20c5190605703473a6b63fcc87f6e2b4a4ba23182c04d8"}
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.724838 4982 scope.go:117] "RemoveContainer" containerID="13fab8e46b94daaa7da9aedd764b6212d07261ae7e701820a164fe79143b3107"
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.729541 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrf2d" event={"ID":"7af0b7a3-4f83-46c7-8947-a5cfb7423600","Type":"ContainerStarted","Data":"678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b"}
Jan 23 08:30:41 crc kubenswrapper[4982]: I0123 08:30:41.751117 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrf2d" podStartSLOduration=2.272404749 podStartE2EDuration="4.751098615s" podCreationTimestamp="2026-01-23 08:30:37 +0000 UTC" firstStartedPulling="2026-01-23 08:30:38.694750739 +0000 UTC m=+2113.175114125" lastFinishedPulling="2026-01-23 08:30:41.173444605 +0000 UTC m=+2115.653807991" observedRunningTime="2026-01-23 08:30:41.750418296 +0000 UTC m=+2116.230781692" watchObservedRunningTime="2026-01-23 08:30:41.751098615 +0000 UTC m=+2116.231462001"
Jan 23 08:30:42 crc kubenswrapper[4982]: I0123 08:30:42.739144 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2"}
Jan 23 08:30:47 crc kubenswrapper[4982]: I0123 08:30:47.817925 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:47 crc kubenswrapper[4982]: I0123 08:30:47.818471 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:47 crc kubenswrapper[4982]: I0123 08:30:47.870995 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:48 crc kubenswrapper[4982]: I0123 08:30:48.849652 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:48 crc kubenswrapper[4982]: I0123 08:30:48.895583 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrf2d"]
Jan 23 08:30:50 crc kubenswrapper[4982]: I0123 08:30:50.803749 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrf2d" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="registry-server" containerID="cri-o://678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b" gracePeriod=2
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.284317 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.432949 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjfjn\" (UniqueName: \"kubernetes.io/projected/7af0b7a3-4f83-46c7-8947-a5cfb7423600-kube-api-access-zjfjn\") pod \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") "
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.433826 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-catalog-content\") pod \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") "
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.433947 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-utilities\") pod \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\" (UID: \"7af0b7a3-4f83-46c7-8947-a5cfb7423600\") "
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.435031 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-utilities" (OuterVolumeSpecName: "utilities") pod "7af0b7a3-4f83-46c7-8947-a5cfb7423600" (UID: "7af0b7a3-4f83-46c7-8947-a5cfb7423600"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.438920 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af0b7a3-4f83-46c7-8947-a5cfb7423600-kube-api-access-zjfjn" (OuterVolumeSpecName: "kube-api-access-zjfjn") pod "7af0b7a3-4f83-46c7-8947-a5cfb7423600" (UID: "7af0b7a3-4f83-46c7-8947-a5cfb7423600"). InnerVolumeSpecName "kube-api-access-zjfjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.492639 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7af0b7a3-4f83-46c7-8947-a5cfb7423600" (UID: "7af0b7a3-4f83-46c7-8947-a5cfb7423600"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.535459 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.535499 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjfjn\" (UniqueName: \"kubernetes.io/projected/7af0b7a3-4f83-46c7-8947-a5cfb7423600-kube-api-access-zjfjn\") on node \"crc\" DevicePath \"\""
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.535509 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af0b7a3-4f83-46c7-8947-a5cfb7423600-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.816480 4982 generic.go:334] "Generic (PLEG): container finished" podID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerID="678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b" exitCode=0
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.816533 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrf2d" event={"ID":"7af0b7a3-4f83-46c7-8947-a5cfb7423600","Type":"ContainerDied","Data":"678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b"}
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.816579 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrf2d" event={"ID":"7af0b7a3-4f83-46c7-8947-a5cfb7423600","Type":"ContainerDied","Data":"f526577a839668d81163e4ade2c19da528bbda8bfdb7ea7ac37dd17a02aad7fe"}
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.816597 4982 scope.go:117] "RemoveContainer" containerID="678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.816616 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrf2d"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.846742 4982 scope.go:117] "RemoveContainer" containerID="a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.857050 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrf2d"]
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.864733 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrf2d"]
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.877056 4982 scope.go:117] "RemoveContainer" containerID="13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.892699 4982 scope.go:117] "RemoveContainer" containerID="678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b"
Jan 23 08:30:51 crc kubenswrapper[4982]: E0123 08:30:51.893084 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b\": container with ID starting with 678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b not found: ID does not exist" containerID="678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.893134 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b"} err="failed to get container status \"678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b\": rpc error: code = NotFound desc = could not find container \"678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b\": container with ID starting with 678711e17a3478115f0dd9ed5baa45d634d4a204409bdd1d2bc76674aa9a683b not found: ID does not exist"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.893187 4982 scope.go:117] "RemoveContainer" containerID="a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03"
Jan 23 08:30:51 crc kubenswrapper[4982]: E0123 08:30:51.893705 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03\": container with ID starting with a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03 not found: ID does not exist" containerID="a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.893735 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03"} err="failed to get container status \"a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03\": rpc error: code = NotFound desc = could not find container \"a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03\": container with ID starting with a15e0fe1526054b0644c110ab0b5379730ba2dbde83fe33e969a5d8286299f03 not found: ID does not exist"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.893761 4982 scope.go:117] "RemoveContainer" containerID="13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7"
Jan 23 08:30:51 crc kubenswrapper[4982]: E0123 08:30:51.894011 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7\": container with ID starting with 13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7 not found: ID does not exist" containerID="13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7"
Jan 23 08:30:51 crc kubenswrapper[4982]: I0123 08:30:51.894040 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7"} err="failed to get container status \"13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7\": rpc error: code = NotFound desc = could not find container \"13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7\": container with ID starting with 13e398f200aebff0c80d15632d9ea8698dcc4ca46c1cd8fd7f1ad447caa84ac7 not found: ID does not exist"
Jan 23 08:30:53 crc kubenswrapper[4982]: I0123 08:30:53.839153 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" path="/var/lib/kubelet/pods/7af0b7a3-4f83-46c7-8947-a5cfb7423600/volumes"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.297801 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c22qb"]
Jan 23 08:32:20 crc kubenswrapper[4982]: E0123 08:32:20.298829 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="extract-utilities"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.298845 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="extract-utilities"
Jan 23 08:32:20 crc kubenswrapper[4982]: E0123 08:32:20.298861 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="registry-server"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.298867 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="registry-server"
Jan 23 08:32:20 crc kubenswrapper[4982]: E0123 08:32:20.298878 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="extract-content"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.298884 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="extract-content"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.299041 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af0b7a3-4f83-46c7-8947-a5cfb7423600" containerName="registry-server"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.300451 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.321022 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22qb"]
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.330517 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-catalog-content\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.330599 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65r9\" (UniqueName: \"kubernetes.io/projected/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-kube-api-access-x65r9\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.330650 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-utilities\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.431959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-catalog-content\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.432014 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65r9\" (UniqueName: \"kubernetes.io/projected/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-kube-api-access-x65r9\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.432038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-utilities\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.432462 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-catalog-content\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.432480 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-utilities\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.452409 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65r9\" (UniqueName: \"kubernetes.io/projected/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-kube-api-access-x65r9\") pod \"redhat-marketplace-c22qb\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.624784 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c22qb"
Jan 23 08:32:20 crc kubenswrapper[4982]: I0123 08:32:20.887327 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22qb"]
Jan 23 08:32:21 crc kubenswrapper[4982]: I0123 08:32:21.593958 4982 generic.go:334] "Generic (PLEG): container finished" podID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerID="d971e3da0195a8162acfa3a824e5407e0b74aba96b36cf1093722cc63e510787" exitCode=0
Jan 23 08:32:21 crc kubenswrapper[4982]: I0123 08:32:21.594048 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22qb" event={"ID":"e39dd6b7-f306-4d42-ae41-600d78bc1a7c","Type":"ContainerDied","Data":"d971e3da0195a8162acfa3a824e5407e0b74aba96b36cf1093722cc63e510787"}
Jan 23 08:32:21 crc kubenswrapper[4982]: I0123 08:32:21.594316 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22qb" event={"ID":"e39dd6b7-f306-4d42-ae41-600d78bc1a7c","Type":"ContainerStarted","Data":"ecc4b9ec1f73c2ad984cd03b828998d5bcbf0481b0cbeafcdd0240dd7a5aacc1"}
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.091255 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g55kb"]
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.093172 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.101202 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g55kb"]
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.270614 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-catalog-content\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.270704 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzzk\" (UniqueName: \"kubernetes.io/projected/c2849345-c2ce-4763-aea0-62459d6fd213-kube-api-access-ztzzk\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.270815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-utilities\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.372337 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-catalog-content\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.372400 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzzk\" (UniqueName: \"kubernetes.io/projected/c2849345-c2ce-4763-aea0-62459d6fd213-kube-api-access-ztzzk\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.372486 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-utilities\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.372960 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-catalog-content\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.372988 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-utilities\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb"
Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.398219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-ztzzk\" (UniqueName: \"kubernetes.io/projected/c2849345-c2ce-4763-aea0-62459d6fd213-kube-api-access-ztzzk\") pod \"redhat-operators-g55kb\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.416173 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.713773 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhtxr"] Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.716757 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.723412 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhtxr"] Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.881780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl95s\" (UniqueName: \"kubernetes.io/projected/7a223731-46ea-48c6-be91-12b24ac1fb9e-kube-api-access-nl95s\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.881877 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-utilities\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.881940 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-catalog-content\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.918173 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g55kb"] Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.983756 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-utilities\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.983831 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-catalog-content\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.983917 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl95s\" (UniqueName: \"kubernetes.io/projected/7a223731-46ea-48c6-be91-12b24ac1fb9e-kube-api-access-nl95s\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " 
pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.984736 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-utilities\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:22 crc kubenswrapper[4982]: I0123 08:32:22.985243 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-catalog-content\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.021919 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl95s\" (UniqueName: \"kubernetes.io/projected/7a223731-46ea-48c6-be91-12b24ac1fb9e-kube-api-access-nl95s\") pod \"community-operators-vhtxr\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.053452 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.584820 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhtxr"] Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.615473 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhtxr" event={"ID":"7a223731-46ea-48c6-be91-12b24ac1fb9e","Type":"ContainerStarted","Data":"6fb8a6c85d6ac2435bba62fdcdc86d1d355fbfb59d02046df96547e0eac0e863"} Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.618835 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2849345-c2ce-4763-aea0-62459d6fd213" containerID="db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9" exitCode=0 Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.618903 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerDied","Data":"db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9"} Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.618938 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerStarted","Data":"3082f6d26d8b34fda37e9ca62e5eedb74d6c5da9eb6da6f7be59ae619170beb9"} Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.625814 4982 generic.go:334] "Generic (PLEG): container finished" podID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerID="37cd6a35a06b4e735419d536ea3b4bb37c2cb0fd85ec7a942e4c53b1e68ca041" exitCode=0 Jan 23 08:32:23 crc kubenswrapper[4982]: I0123 08:32:23.625868 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22qb" event={"ID":"e39dd6b7-f306-4d42-ae41-600d78bc1a7c","Type":"ContainerDied","Data":"37cd6a35a06b4e735419d536ea3b4bb37c2cb0fd85ec7a942e4c53b1e68ca041"} Jan 23 08:32:24 crc kubenswrapper[4982]: I0123 08:32:24.635302 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a223731-46ea-48c6-be91-12b24ac1fb9e" 
containerID="9eeba7c894d4afec0b8b1d8296bc6538db0174fd202995517ff72667adb43eb0" exitCode=0 Jan 23 08:32:24 crc kubenswrapper[4982]: I0123 08:32:24.635458 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhtxr" event={"ID":"7a223731-46ea-48c6-be91-12b24ac1fb9e","Type":"ContainerDied","Data":"9eeba7c894d4afec0b8b1d8296bc6538db0174fd202995517ff72667adb43eb0"} Jan 23 08:32:24 crc kubenswrapper[4982]: I0123 08:32:24.639208 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerStarted","Data":"54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af"} Jan 23 08:32:24 crc kubenswrapper[4982]: I0123 08:32:24.643222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22qb" event={"ID":"e39dd6b7-f306-4d42-ae41-600d78bc1a7c","Type":"ContainerStarted","Data":"0afbe2162dafd9664e78c382291e7c621f0c29c17fd4558950e3de1196f46f1e"} Jan 23 08:32:24 crc kubenswrapper[4982]: I0123 08:32:24.684646 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c22qb" podStartSLOduration=2.189504727 podStartE2EDuration="4.684603968s" podCreationTimestamp="2026-01-23 08:32:20 +0000 UTC" firstStartedPulling="2026-01-23 08:32:21.595671043 +0000 UTC m=+2216.076034429" lastFinishedPulling="2026-01-23 08:32:24.090770284 +0000 UTC m=+2218.571133670" observedRunningTime="2026-01-23 08:32:24.678067702 +0000 UTC m=+2219.158431088" watchObservedRunningTime="2026-01-23 08:32:24.684603968 +0000 UTC m=+2219.164967354" Jan 23 08:32:25 crc kubenswrapper[4982]: I0123 08:32:25.653544 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerID="43b407ebf875475a0870930eadea519d331659b9ccc8b4a127a1db84f4b61d87" exitCode=0 Jan 23 08:32:25 crc kubenswrapper[4982]: I0123 08:32:25.655252 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhtxr" event={"ID":"7a223731-46ea-48c6-be91-12b24ac1fb9e","Type":"ContainerDied","Data":"43b407ebf875475a0870930eadea519d331659b9ccc8b4a127a1db84f4b61d87"} Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.625256 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c22qb" Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.626099 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c22qb" Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.672371 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c22qb" Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.700035 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2849345-c2ce-4763-aea0-62459d6fd213" containerID="54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af" exitCode=0 Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.700120 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerDied","Data":"54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af"} Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.704868 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-vhtxr" event={"ID":"7a223731-46ea-48c6-be91-12b24ac1fb9e","Type":"ContainerStarted","Data":"cab0bb2e36af50dadd08f6c9417a050072c42c086f7579ee82acfd462edb3d7a"} Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.739535 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vhtxr" podStartSLOduration=4.054249265 podStartE2EDuration="8.739518885s" podCreationTimestamp="2026-01-23 08:32:22 +0000 UTC" firstStartedPulling="2026-01-23 08:32:24.63741682 +0000 UTC m=+2219.117780206" lastFinishedPulling="2026-01-23 08:32:29.32268644 +0000 UTC m=+2223.803049826" observedRunningTime="2026-01-23 08:32:30.737575573 +0000 UTC m=+2225.217938969" watchObservedRunningTime="2026-01-23 08:32:30.739518885 +0000 UTC m=+2225.219882271" Jan 23 08:32:30 crc kubenswrapper[4982]: I0123 08:32:30.758401 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c22qb" Jan 23 08:32:31 crc kubenswrapper[4982]: I0123 08:32:31.713826 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerStarted","Data":"807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d"} Jan 23 08:32:31 crc kubenswrapper[4982]: I0123 08:32:31.740081 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g55kb" podStartSLOduration=2.189533108 podStartE2EDuration="9.740059715s" podCreationTimestamp="2026-01-23 08:32:22 +0000 UTC" firstStartedPulling="2026-01-23 08:32:23.620820349 +0000 UTC m=+2218.101183735" lastFinishedPulling="2026-01-23 08:32:31.171346956 +0000 UTC m=+2225.651710342" observedRunningTime="2026-01-23 08:32:31.734835885 +0000 UTC m=+2226.215199281" watchObservedRunningTime="2026-01-23 08:32:31.740059715 +0000 UTC m=+2226.220423101" Jan 23 08:32:32 crc kubenswrapper[4982]: I0123 08:32:32.416590 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:32 crc kubenswrapper[4982]: I0123 08:32:32.416667 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:33 crc kubenswrapper[4982]: I0123 08:32:33.054573 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:33 crc kubenswrapper[4982]: I0123 08:32:33.055351 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:33 crc kubenswrapper[4982]: I0123 08:32:33.106420 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:33 crc kubenswrapper[4982]: I0123 08:32:33.466607 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g55kb" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="registry-server" probeResult="failure" output=< Jan 23 08:32:33 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 08:32:33 crc kubenswrapper[4982]: > Jan 23 08:32:35 crc kubenswrapper[4982]: I0123 08:32:35.273934 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22qb"] Jan 23 08:32:35 crc kubenswrapper[4982]: 
I0123 08:32:35.274185 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c22qb" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="registry-server" containerID="cri-o://0afbe2162dafd9664e78c382291e7c621f0c29c17fd4558950e3de1196f46f1e" gracePeriod=2 Jan 23 08:32:37 crc kubenswrapper[4982]: I0123 08:32:37.779511 4982 generic.go:334] "Generic (PLEG): container finished" podID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerID="0afbe2162dafd9664e78c382291e7c621f0c29c17fd4558950e3de1196f46f1e" exitCode=0 Jan 23 08:32:37 crc kubenswrapper[4982]: I0123 08:32:37.779580 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22qb" event={"ID":"e39dd6b7-f306-4d42-ae41-600d78bc1a7c","Type":"ContainerDied","Data":"0afbe2162dafd9664e78c382291e7c621f0c29c17fd4558950e3de1196f46f1e"} Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.712438 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c22qb" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.790938 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c22qb" event={"ID":"e39dd6b7-f306-4d42-ae41-600d78bc1a7c","Type":"ContainerDied","Data":"ecc4b9ec1f73c2ad984cd03b828998d5bcbf0481b0cbeafcdd0240dd7a5aacc1"} Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.791015 4982 scope.go:117] "RemoveContainer" containerID="0afbe2162dafd9664e78c382291e7c621f0c29c17fd4558950e3de1196f46f1e" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.791086 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c22qb" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.812356 4982 scope.go:117] "RemoveContainer" containerID="37cd6a35a06b4e735419d536ea3b4bb37c2cb0fd85ec7a942e4c53b1e68ca041" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.830857 4982 scope.go:117] "RemoveContainer" containerID="d971e3da0195a8162acfa3a824e5407e0b74aba96b36cf1093722cc63e510787" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.833944 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-catalog-content\") pod \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.834096 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x65r9\" (UniqueName: \"kubernetes.io/projected/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-kube-api-access-x65r9\") pod \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.834136 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-utilities\") pod \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\" (UID: \"e39dd6b7-f306-4d42-ae41-600d78bc1a7c\") " Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.835156 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-utilities" (OuterVolumeSpecName: "utilities") pod "e39dd6b7-f306-4d42-ae41-600d78bc1a7c" (UID: 
"e39dd6b7-f306-4d42-ae41-600d78bc1a7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.840279 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-kube-api-access-x65r9" (OuterVolumeSpecName: "kube-api-access-x65r9") pod "e39dd6b7-f306-4d42-ae41-600d78bc1a7c" (UID: "e39dd6b7-f306-4d42-ae41-600d78bc1a7c"). InnerVolumeSpecName "kube-api-access-x65r9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.863597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e39dd6b7-f306-4d42-ae41-600d78bc1a7c" (UID: "e39dd6b7-f306-4d42-ae41-600d78bc1a7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.937077 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.937115 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x65r9\" (UniqueName: \"kubernetes.io/projected/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-kube-api-access-x65r9\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:38 crc kubenswrapper[4982]: I0123 08:32:38.937126 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39dd6b7-f306-4d42-ae41-600d78bc1a7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:39 crc kubenswrapper[4982]: I0123 08:32:39.130004 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22qb"] Jan 23 08:32:39 crc kubenswrapper[4982]: I0123 08:32:39.138463 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c22qb"] Jan 23 08:32:39 crc kubenswrapper[4982]: I0123 08:32:39.837611 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" path="/var/lib/kubelet/pods/e39dd6b7-f306-4d42-ae41-600d78bc1a7c/volumes" Jan 23 08:32:41 crc kubenswrapper[4982]: I0123 08:32:41.435401 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:32:41 crc kubenswrapper[4982]: I0123 08:32:41.435458 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:32:42 crc kubenswrapper[4982]: I0123 08:32:42.466518 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:42 crc kubenswrapper[4982]: I0123 08:32:42.509408 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:43 crc kubenswrapper[4982]: I0123 08:32:43.102934 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:43 crc kubenswrapper[4982]: I0123 08:32:43.275580 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g55kb"] Jan 23 08:32:43 crc kubenswrapper[4982]: I0123 08:32:43.828182 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g55kb" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="registry-server" containerID="cri-o://807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d" gracePeriod=2 Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.230238 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.426454 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztzzk\" (UniqueName: \"kubernetes.io/projected/c2849345-c2ce-4763-aea0-62459d6fd213-kube-api-access-ztzzk\") pod \"c2849345-c2ce-4763-aea0-62459d6fd213\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.426597 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-catalog-content\") pod \"c2849345-c2ce-4763-aea0-62459d6fd213\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.426641 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-utilities\") pod \"c2849345-c2ce-4763-aea0-62459d6fd213\" (UID: \"c2849345-c2ce-4763-aea0-62459d6fd213\") " Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.428953 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-utilities" (OuterVolumeSpecName: "utilities") pod "c2849345-c2ce-4763-aea0-62459d6fd213" (UID: "c2849345-c2ce-4763-aea0-62459d6fd213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.433579 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2849345-c2ce-4763-aea0-62459d6fd213-kube-api-access-ztzzk" (OuterVolumeSpecName: "kube-api-access-ztzzk") pod "c2849345-c2ce-4763-aea0-62459d6fd213" (UID: "c2849345-c2ce-4763-aea0-62459d6fd213"). InnerVolumeSpecName "kube-api-access-ztzzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.541197 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.541783 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztzzk\" (UniqueName: \"kubernetes.io/projected/c2849345-c2ce-4763-aea0-62459d6fd213-kube-api-access-ztzzk\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.593881 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2849345-c2ce-4763-aea0-62459d6fd213" (UID: "c2849345-c2ce-4763-aea0-62459d6fd213"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.643058 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2849345-c2ce-4763-aea0-62459d6fd213-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.840165 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2849345-c2ce-4763-aea0-62459d6fd213" containerID="807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d" exitCode=0 Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.840213 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerDied","Data":"807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d"} Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.840240 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g55kb" event={"ID":"c2849345-c2ce-4763-aea0-62459d6fd213","Type":"ContainerDied","Data":"3082f6d26d8b34fda37e9ca62e5eedb74d6c5da9eb6da6f7be59ae619170beb9"} Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.840247 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g55kb" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.840258 4982 scope.go:117] "RemoveContainer" containerID="807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.865912 4982 scope.go:117] "RemoveContainer" containerID="54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.880092 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g55kb"] Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.898227 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g55kb"] Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.900481 4982 scope.go:117] "RemoveContainer" containerID="db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.921258 4982 scope.go:117] "RemoveContainer" containerID="807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d" Jan 23 08:32:44 crc kubenswrapper[4982]: E0123 08:32:44.921729 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d\": container with ID starting with 807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d not found: ID does not exist" containerID="807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.921759 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d"} err="failed to get container status \"807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d\": rpc error: code = NotFound desc = could not find container \"807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d\": container with ID starting with 807316aa8917c7a58258a3936013835c93433fe332fd08ed5cc494feefa0265d not found: ID does not exist" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.921781 4982 scope.go:117] "RemoveContainer" containerID="54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af" Jan 23 08:32:44 crc kubenswrapper[4982]: E0123 08:32:44.922179 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af\": container with ID starting with 54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af not found: ID does not exist" containerID="54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.922228 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af"} err="failed to get container status \"54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af\": rpc error: code = NotFound desc = could not find container \"54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af\": container with ID starting with 54bdc6f42f2aec68dab058c5d31e9e1d6cda1a7ac6ad88522797404264ad50af not found: ID does not exist" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.922263 4982 scope.go:117] "RemoveContainer" 
containerID="db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9" Jan 23 08:32:44 crc kubenswrapper[4982]: E0123 08:32:44.922542 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9\": container with ID starting with db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9 not found: ID does not exist" containerID="db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9" Jan 23 08:32:44 crc kubenswrapper[4982]: I0123 08:32:44.922564 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9"} err="failed to get container status \"db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9\": rpc error: code = NotFound desc = could not find container \"db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9\": container with ID starting with db6e1ba0c2cfb0f25e2013f0d0421dcab138dbfd6dc7d5f8aff460de322d02e9 not found: ID does not exist" Jan 23 08:32:45 crc kubenswrapper[4982]: I0123 08:32:45.473803 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhtxr"] Jan 23 08:32:45 crc kubenswrapper[4982]: I0123 08:32:45.474329 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vhtxr" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="registry-server" containerID="cri-o://cab0bb2e36af50dadd08f6c9417a050072c42c086f7579ee82acfd462edb3d7a" gracePeriod=2 Jan 23 08:32:45 crc kubenswrapper[4982]: I0123 08:32:45.849010 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" path="/var/lib/kubelet/pods/c2849345-c2ce-4763-aea0-62459d6fd213/volumes" Jan 23 08:32:45 crc kubenswrapper[4982]: I0123 08:32:45.849806 4982 generic.go:334] "Generic (PLEG): container finished" podID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerID="cab0bb2e36af50dadd08f6c9417a050072c42c086f7579ee82acfd462edb3d7a" exitCode=0 Jan 23 08:32:45 crc kubenswrapper[4982]: I0123 08:32:45.849841 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhtxr" event={"ID":"7a223731-46ea-48c6-be91-12b24ac1fb9e","Type":"ContainerDied","Data":"cab0bb2e36af50dadd08f6c9417a050072c42c086f7579ee82acfd462edb3d7a"} Jan 23 08:32:45 crc kubenswrapper[4982]: I0123 08:32:45.990203 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.167902 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-catalog-content\") pod \"7a223731-46ea-48c6-be91-12b24ac1fb9e\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.168032 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-utilities\") pod \"7a223731-46ea-48c6-be91-12b24ac1fb9e\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.168094 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl95s\" (UniqueName: \"kubernetes.io/projected/7a223731-46ea-48c6-be91-12b24ac1fb9e-kube-api-access-nl95s\") pod \"7a223731-46ea-48c6-be91-12b24ac1fb9e\" (UID: \"7a223731-46ea-48c6-be91-12b24ac1fb9e\") " Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.168937 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-utilities" (OuterVolumeSpecName: "utilities") pod "7a223731-46ea-48c6-be91-12b24ac1fb9e" (UID: "7a223731-46ea-48c6-be91-12b24ac1fb9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.173993 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a223731-46ea-48c6-be91-12b24ac1fb9e-kube-api-access-nl95s" (OuterVolumeSpecName: "kube-api-access-nl95s") pod "7a223731-46ea-48c6-be91-12b24ac1fb9e" (UID: "7a223731-46ea-48c6-be91-12b24ac1fb9e"). InnerVolumeSpecName "kube-api-access-nl95s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.220768 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a223731-46ea-48c6-be91-12b24ac1fb9e" (UID: "7a223731-46ea-48c6-be91-12b24ac1fb9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.269924 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.269962 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl95s\" (UniqueName: \"kubernetes.io/projected/7a223731-46ea-48c6-be91-12b24ac1fb9e-kube-api-access-nl95s\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.269972 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a223731-46ea-48c6-be91-12b24ac1fb9e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.861225 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhtxr" event={"ID":"7a223731-46ea-48c6-be91-12b24ac1fb9e","Type":"ContainerDied","Data":"6fb8a6c85d6ac2435bba62fdcdc86d1d355fbfb59d02046df96547e0eac0e863"} Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.861287 4982 scope.go:117] "RemoveContainer" containerID="cab0bb2e36af50dadd08f6c9417a050072c42c086f7579ee82acfd462edb3d7a" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.862239 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhtxr" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.880488 4982 scope.go:117] "RemoveContainer" containerID="43b407ebf875475a0870930eadea519d331659b9ccc8b4a127a1db84f4b61d87" Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.898185 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhtxr"] Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.904587 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vhtxr"] Jan 23 08:32:46 crc kubenswrapper[4982]: I0123 08:32:46.918608 4982 scope.go:117] "RemoveContainer" containerID="9eeba7c894d4afec0b8b1d8296bc6538db0174fd202995517ff72667adb43eb0" Jan 23 08:32:47 crc kubenswrapper[4982]: I0123 08:32:47.837381 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" path="/var/lib/kubelet/pods/7a223731-46ea-48c6-be91-12b24ac1fb9e/volumes" Jan 23 08:33:11 crc kubenswrapper[4982]: I0123 08:33:11.435775 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:33:11 crc kubenswrapper[4982]: I0123 08:33:11.437511 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:33:41 crc kubenswrapper[4982]: I0123 08:33:41.436042 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:33:41 crc kubenswrapper[4982]: I0123 08:33:41.436687 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:33:41 crc kubenswrapper[4982]: I0123 08:33:41.436738 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:33:41 crc kubenswrapper[4982]: I0123 08:33:41.437562 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:33:41 crc kubenswrapper[4982]: I0123 08:33:41.437642 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" gracePeriod=600 Jan 23 08:33:41 crc kubenswrapper[4982]: E0123 08:33:41.561701 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:33:42 crc kubenswrapper[4982]: I0123 08:33:42.329361 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" exitCode=0 Jan 23 08:33:42 crc kubenswrapper[4982]: I0123 08:33:42.329436 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2"} Jan 23 08:33:42 crc kubenswrapper[4982]: I0123 08:33:42.329858 4982 scope.go:117] "RemoveContainer" containerID="7f74b2e936a3bf06df20c5190605703473a6b63fcc87f6e2b4a4ba23182c04d8" Jan 23 08:33:42 crc kubenswrapper[4982]: I0123 08:33:42.330417 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:33:42 crc kubenswrapper[4982]: E0123 08:33:42.330696 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:33:56 crc kubenswrapper[4982]: I0123 08:33:56.826996 4982 scope.go:117] "RemoveContainer" 
containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:33:56 crc kubenswrapper[4982]: E0123 08:33:56.827917 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:34:08 crc kubenswrapper[4982]: I0123 08:34:08.827595 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:34:08 crc kubenswrapper[4982]: E0123 08:34:08.828413 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:34:19 crc kubenswrapper[4982]: I0123 08:34:19.827790 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:34:19 crc kubenswrapper[4982]: E0123 08:34:19.829726 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:34:33 crc kubenswrapper[4982]: I0123 08:34:33.827090 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:34:33 crc kubenswrapper[4982]: E0123 08:34:33.827850 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:34:45 crc kubenswrapper[4982]: I0123 08:34:45.832674 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:34:45 crc kubenswrapper[4982]: E0123 08:34:45.833417 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:34:59 crc kubenswrapper[4982]: I0123 08:34:59.835139 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:34:59 crc kubenswrapper[4982]: E0123 08:34:59.835982 4982 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:35:10 crc kubenswrapper[4982]: I0123 08:35:10.828833 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:35:10 crc kubenswrapper[4982]: E0123 08:35:10.829725 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:35:25 crc kubenswrapper[4982]: I0123 08:35:25.832107 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:35:25 crc kubenswrapper[4982]: E0123 08:35:25.833138 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:35:40 crc kubenswrapper[4982]: I0123 08:35:40.827612 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:35:40 crc kubenswrapper[4982]: E0123 08:35:40.828700 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:35:53 crc kubenswrapper[4982]: I0123 08:35:53.827640 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:35:53 crc kubenswrapper[4982]: E0123 08:35:53.828434 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:36:07 crc kubenswrapper[4982]: I0123 08:36:07.828723 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:36:07 crc kubenswrapper[4982]: E0123 08:36:07.829643 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:36:19 crc kubenswrapper[4982]: I0123 08:36:19.827330 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:36:19 crc kubenswrapper[4982]: E0123 08:36:19.828358 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:36:30 crc kubenswrapper[4982]: I0123 08:36:30.827312 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:36:30 crc kubenswrapper[4982]: E0123 08:36:30.828262 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:36:44 crc kubenswrapper[4982]: I0123 08:36:44.828031 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:36:44 crc kubenswrapper[4982]: E0123 08:36:44.830076 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:36:56 crc kubenswrapper[4982]: I0123 08:36:56.827708 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:36:56 crc kubenswrapper[4982]: E0123 08:36:56.828611 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:37:07 crc kubenswrapper[4982]: I0123 08:37:07.827209 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:37:07 crc kubenswrapper[4982]: E0123 08:37:07.828282 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:37:18 crc kubenswrapper[4982]: I0123 08:37:18.827480 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:37:18 crc kubenswrapper[4982]: E0123 08:37:18.828396 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:37:30 crc kubenswrapper[4982]: I0123 08:37:30.827825 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:37:30 crc kubenswrapper[4982]: E0123 08:37:30.828564 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:37:44 crc kubenswrapper[4982]: I0123 08:37:44.828175 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:37:44 crc kubenswrapper[4982]: E0123 08:37:44.830271 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:37:55 crc kubenswrapper[4982]: I0123 08:37:55.832037 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:37:55 crc kubenswrapper[4982]: E0123 08:37:55.832743 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:38:10 crc kubenswrapper[4982]: I0123 08:38:10.827866 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:38:10 crc kubenswrapper[4982]: E0123 08:38:10.828709 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:38:21 crc kubenswrapper[4982]: I0123 08:38:21.827326 4982 scope.go:117] "RemoveContainer" 
containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:38:21 crc kubenswrapper[4982]: E0123 08:38:21.828241 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:38:34 crc kubenswrapper[4982]: I0123 08:38:34.827570 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:38:34 crc kubenswrapper[4982]: E0123 08:38:34.828902 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:38:48 crc kubenswrapper[4982]: I0123 08:38:48.826876 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:38:49 crc kubenswrapper[4982]: I0123 08:38:49.816031 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"0c1e571d0b23feda2547c5e8ed8474731ee4de013907e867fabbfef49746d7eb"} Jan 23 08:41:11 crc kubenswrapper[4982]: I0123 08:41:11.435885 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:41:11 crc kubenswrapper[4982]: I0123 08:41:11.436587 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.450714 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ks4d8"] Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451791 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="extract-utilities" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451812 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="extract-utilities" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451826 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="extract-utilities" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451834 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="extract-utilities" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451848 4982 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451858 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451868 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="extract-content" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451876 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="extract-content" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451897 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451906 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451917 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="extract-content" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451924 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="extract-content" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451935 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="extract-content" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451943 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="extract-content" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451973 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="extract-utilities" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451981 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="extract-utilities" Jan 23 08:41:30 crc kubenswrapper[4982]: E0123 08:41:30.451992 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.451998 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.452194 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a223731-46ea-48c6-be91-12b24ac1fb9e" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.452210 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2849345-c2ce-4763-aea0-62459d6fd213" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.452241 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39dd6b7-f306-4d42-ae41-600d78bc1a7c" containerName="registry-server" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.454417 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.460585 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks4d8"] Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.569148 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw752\" (UniqueName: \"kubernetes.io/projected/837e58e5-5edb-427d-8264-47f2c462a6c2-kube-api-access-nw752\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.569409 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-catalog-content\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.569506 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-utilities\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.670520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw752\" (UniqueName: \"kubernetes.io/projected/837e58e5-5edb-427d-8264-47f2c462a6c2-kube-api-access-nw752\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.670617 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-catalog-content\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.670726 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-utilities\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.671356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-catalog-content\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.671382 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-utilities\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.692519 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nw752\" (UniqueName: \"kubernetes.io/projected/837e58e5-5edb-427d-8264-47f2c462a6c2-kube-api-access-nw752\") pod \"certified-operators-ks4d8\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:30 crc kubenswrapper[4982]: I0123 08:41:30.776422 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:31 crc kubenswrapper[4982]: I0123 08:41:31.284870 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ks4d8"] Jan 23 08:41:32 crc kubenswrapper[4982]: I0123 08:41:32.214663 4982 generic.go:334] "Generic (PLEG): container finished" podID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerID="d41bded6cad2223425362f57cb54b659de3d817a749f25b39541ff87cf9ed3ee" exitCode=0 Jan 23 08:41:32 crc kubenswrapper[4982]: I0123 08:41:32.214846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerDied","Data":"d41bded6cad2223425362f57cb54b659de3d817a749f25b39541ff87cf9ed3ee"} Jan 23 08:41:32 crc kubenswrapper[4982]: I0123 08:41:32.215090 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerStarted","Data":"f0d8665f277761c61274e2941a5ff3d1f8931c0f23ba5632fc32c6cf02f12b26"} Jan 23 08:41:32 crc kubenswrapper[4982]: I0123 08:41:32.217292 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:41:33 crc kubenswrapper[4982]: I0123 08:41:33.222682 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerStarted","Data":"5962c028acb1cfc6ad1dba1cee83d3012bccca1a3ef78c01330398106a9391aa"} Jan 23 08:41:34 crc kubenswrapper[4982]: I0123 08:41:34.235418 4982 generic.go:334] "Generic (PLEG): container finished" podID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerID="5962c028acb1cfc6ad1dba1cee83d3012bccca1a3ef78c01330398106a9391aa" exitCode=0 Jan 23 08:41:34 crc kubenswrapper[4982]: I0123 08:41:34.235473 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerDied","Data":"5962c028acb1cfc6ad1dba1cee83d3012bccca1a3ef78c01330398106a9391aa"} Jan 23 08:41:35 crc kubenswrapper[4982]: I0123 08:41:35.246470 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerStarted","Data":"ac5665981a16fb3ddb148b7543b9f2af91d51dc0405d98bc336b4499c6335138"} Jan 23 08:41:35 crc kubenswrapper[4982]: I0123 08:41:35.267211 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ks4d8" podStartSLOduration=2.680716632 podStartE2EDuration="5.26719265s" podCreationTimestamp="2026-01-23 08:41:30 +0000 UTC" firstStartedPulling="2026-01-23 08:41:32.217047116 +0000 UTC m=+2766.697410502" lastFinishedPulling="2026-01-23 08:41:34.803523144 +0000 UTC m=+2769.283886520" observedRunningTime="2026-01-23 08:41:35.263576369 +0000 UTC m=+2769.743939785" watchObservedRunningTime="2026-01-23 
08:41:35.26719265 +0000 UTC m=+2769.747556056" Jan 23 08:41:40 crc kubenswrapper[4982]: I0123 08:41:40.777479 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:40 crc kubenswrapper[4982]: I0123 08:41:40.778192 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:40 crc kubenswrapper[4982]: I0123 08:41:40.827105 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:41 crc kubenswrapper[4982]: I0123 08:41:41.343031 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:41 crc kubenswrapper[4982]: I0123 08:41:41.392874 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks4d8"] Jan 23 08:41:41 crc kubenswrapper[4982]: I0123 08:41:41.437105 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:41:41 crc kubenswrapper[4982]: I0123 08:41:41.437230 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:41:43 crc kubenswrapper[4982]: I0123 08:41:43.316664 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ks4d8" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="registry-server" containerID="cri-o://ac5665981a16fb3ddb148b7543b9f2af91d51dc0405d98bc336b4499c6335138" gracePeriod=2 Jan 23 08:41:44 crc kubenswrapper[4982]: I0123 08:41:44.325681 4982 generic.go:334] "Generic (PLEG): container finished" podID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerID="ac5665981a16fb3ddb148b7543b9f2af91d51dc0405d98bc336b4499c6335138" exitCode=0 Jan 23 08:41:44 crc kubenswrapper[4982]: I0123 08:41:44.325777 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerDied","Data":"ac5665981a16fb3ddb148b7543b9f2af91d51dc0405d98bc336b4499c6335138"} Jan 23 08:41:44 crc kubenswrapper[4982]: I0123 08:41:44.873311 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.004239 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw752\" (UniqueName: \"kubernetes.io/projected/837e58e5-5edb-427d-8264-47f2c462a6c2-kube-api-access-nw752\") pod \"837e58e5-5edb-427d-8264-47f2c462a6c2\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.004391 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-utilities\") pod \"837e58e5-5edb-427d-8264-47f2c462a6c2\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.004466 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-catalog-content\") pod \"837e58e5-5edb-427d-8264-47f2c462a6c2\" (UID: \"837e58e5-5edb-427d-8264-47f2c462a6c2\") " Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.006347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-utilities" (OuterVolumeSpecName: "utilities") pod "837e58e5-5edb-427d-8264-47f2c462a6c2" (UID: "837e58e5-5edb-427d-8264-47f2c462a6c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.006704 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.016961 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837e58e5-5edb-427d-8264-47f2c462a6c2-kube-api-access-nw752" (OuterVolumeSpecName: "kube-api-access-nw752") pod "837e58e5-5edb-427d-8264-47f2c462a6c2" (UID: "837e58e5-5edb-427d-8264-47f2c462a6c2"). InnerVolumeSpecName "kube-api-access-nw752". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.058578 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "837e58e5-5edb-427d-8264-47f2c462a6c2" (UID: "837e58e5-5edb-427d-8264-47f2c462a6c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.107912 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837e58e5-5edb-427d-8264-47f2c462a6c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.107941 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw752\" (UniqueName: \"kubernetes.io/projected/837e58e5-5edb-427d-8264-47f2c462a6c2-kube-api-access-nw752\") on node \"crc\" DevicePath \"\"" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.341212 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ks4d8" event={"ID":"837e58e5-5edb-427d-8264-47f2c462a6c2","Type":"ContainerDied","Data":"f0d8665f277761c61274e2941a5ff3d1f8931c0f23ba5632fc32c6cf02f12b26"} Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.341259 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ks4d8" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.341271 4982 scope.go:117] "RemoveContainer" containerID="ac5665981a16fb3ddb148b7543b9f2af91d51dc0405d98bc336b4499c6335138" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.376094 4982 scope.go:117] "RemoveContainer" containerID="5962c028acb1cfc6ad1dba1cee83d3012bccca1a3ef78c01330398106a9391aa" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.394464 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ks4d8"] Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.403576 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ks4d8"] Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.406760 4982 scope.go:117] "RemoveContainer" containerID="d41bded6cad2223425362f57cb54b659de3d817a749f25b39541ff87cf9ed3ee" Jan 23 08:41:45 crc kubenswrapper[4982]: I0123 08:41:45.837693 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" path="/var/lib/kubelet/pods/837e58e5-5edb-427d-8264-47f2c462a6c2/volumes" Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.436747 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.437220 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.437274 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.438119 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c1e571d0b23feda2547c5e8ed8474731ee4de013907e867fabbfef49746d7eb"} 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.438172 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://0c1e571d0b23feda2547c5e8ed8474731ee4de013907e867fabbfef49746d7eb" gracePeriod=600 Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.564329 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="0c1e571d0b23feda2547c5e8ed8474731ee4de013907e867fabbfef49746d7eb" exitCode=0 Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.564413 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"0c1e571d0b23feda2547c5e8ed8474731ee4de013907e867fabbfef49746d7eb"} Jan 23 08:42:11 crc kubenswrapper[4982]: I0123 08:42:11.564567 4982 scope.go:117] "RemoveContainer" containerID="b4738b9fe9d0d580dd935234b90bc0c9ab50e798e00bb2683510f905e2f107d2" Jan 23 08:42:12 crc kubenswrapper[4982]: I0123 08:42:12.577581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736"} Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.045822 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdzhk"] Jan 23 08:42:42 crc kubenswrapper[4982]: E0123 08:42:42.046786 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="extract-utilities" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.046801 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="extract-utilities" Jan 23 08:42:42 crc kubenswrapper[4982]: E0123 08:42:42.046833 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="extract-content" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.046839 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="extract-content" Jan 23 08:42:42 crc kubenswrapper[4982]: E0123 08:42:42.046858 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="registry-server" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.046865 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="registry-server" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.047032 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="837e58e5-5edb-427d-8264-47f2c462a6c2" containerName="registry-server" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.048320 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.058967 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdzhk"] Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.196852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-utilities\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.196942 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-catalog-content\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.197067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67td8\" (UniqueName: \"kubernetes.io/projected/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-kube-api-access-67td8\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.298282 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67td8\" (UniqueName: \"kubernetes.io/projected/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-kube-api-access-67td8\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.298420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-utilities\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.298485 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-catalog-content\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.298932 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-catalog-content\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.299059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-utilities\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.315563 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-67td8\" (UniqueName: \"kubernetes.io/projected/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-kube-api-access-67td8\") pod \"redhat-operators-cdzhk\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.369740 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.829997 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdzhk"] Jan 23 08:42:42 crc kubenswrapper[4982]: I0123 08:42:42.870863 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerStarted","Data":"5066c1c77104c8c962d92e3ad03936db7847ac31d17dc1c2e57cc8d99842eb41"} Jan 23 08:42:43 crc kubenswrapper[4982]: I0123 08:42:43.877888 4982 generic.go:334] "Generic (PLEG): container finished" podID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerID="0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4" exitCode=0 Jan 23 08:42:43 crc kubenswrapper[4982]: I0123 08:42:43.877967 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerDied","Data":"0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4"} Jan 23 08:42:44 crc kubenswrapper[4982]: I0123 08:42:44.890251 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerStarted","Data":"4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215"} Jan 23 08:42:45 crc kubenswrapper[4982]: I0123 08:42:45.899554 4982 generic.go:334] "Generic (PLEG): container finished" podID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerID="4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215" exitCode=0 Jan 23 08:42:45 crc kubenswrapper[4982]: I0123 08:42:45.899607 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerDied","Data":"4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215"} Jan 23 08:42:46 crc kubenswrapper[4982]: I0123 08:42:46.908157 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerStarted","Data":"3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210"} Jan 23 08:42:46 crc kubenswrapper[4982]: I0123 08:42:46.952558 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdzhk" podStartSLOduration=2.482711836 podStartE2EDuration="4.952390751s" podCreationTimestamp="2026-01-23 08:42:42 +0000 UTC" firstStartedPulling="2026-01-23 08:42:43.879705329 +0000 UTC m=+2838.360068725" lastFinishedPulling="2026-01-23 08:42:46.349384254 +0000 UTC m=+2840.829747640" observedRunningTime="2026-01-23 08:42:46.942525957 +0000 UTC m=+2841.422889383" watchObservedRunningTime="2026-01-23 08:42:46.952390751 +0000 UTC m=+2841.432754137" Jan 23 08:42:52 crc kubenswrapper[4982]: I0123 08:42:52.370156 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 
08:42:52 crc kubenswrapper[4982]: I0123 08:42:52.370729 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:52 crc kubenswrapper[4982]: I0123 08:42:52.412416 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:53 crc kubenswrapper[4982]: I0123 08:42:53.002398 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:53 crc kubenswrapper[4982]: I0123 08:42:53.066420 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdzhk"] Jan 23 08:42:54 crc kubenswrapper[4982]: I0123 08:42:54.965835 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdzhk" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="registry-server" containerID="cri-o://3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210" gracePeriod=2 Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.357312 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.400496 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-utilities\") pod \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.400547 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67td8\" (UniqueName: \"kubernetes.io/projected/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-kube-api-access-67td8\") pod \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.400602 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-catalog-content\") pod \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\" (UID: \"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc\") " Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.402255 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-utilities" (OuterVolumeSpecName: "utilities") pod "a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" (UID: "a8fdafa3-e790-4e08-95ae-0f4f8a848ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.413091 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-kube-api-access-67td8" (OuterVolumeSpecName: "kube-api-access-67td8") pod "a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" (UID: "a8fdafa3-e790-4e08-95ae-0f4f8a848ddc"). InnerVolumeSpecName "kube-api-access-67td8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.504116 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.504154 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67td8\" (UniqueName: \"kubernetes.io/projected/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-kube-api-access-67td8\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.974750 4982 generic.go:334] "Generic (PLEG): container finished" podID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerID="3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210" exitCode=0 Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.974830 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdzhk" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.974833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerDied","Data":"3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210"} Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.975846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdzhk" event={"ID":"a8fdafa3-e790-4e08-95ae-0f4f8a848ddc","Type":"ContainerDied","Data":"5066c1c77104c8c962d92e3ad03936db7847ac31d17dc1c2e57cc8d99842eb41"} Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.975870 4982 scope.go:117] "RemoveContainer" containerID="3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210" Jan 23 08:42:55 crc kubenswrapper[4982]: I0123 08:42:55.996492 4982 scope.go:117] "RemoveContainer" containerID="4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.019160 4982 scope.go:117] "RemoveContainer" containerID="0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.045437 4982 scope.go:117] "RemoveContainer" containerID="3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210" Jan 23 08:42:56 crc kubenswrapper[4982]: E0123 08:42:56.045856 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210\": container with ID starting with 3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210 not found: ID does not exist" containerID="3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.045900 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210"} err="failed to get container status \"3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210\": rpc error: code = NotFound desc = could not find container \"3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210\": container with ID starting with 3be986377b6da7b36a2eb848f946df5da219851050267561661974758dece210 not found: ID does not exist" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.045924 4982 scope.go:117] 
"RemoveContainer" containerID="4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215" Jan 23 08:42:56 crc kubenswrapper[4982]: E0123 08:42:56.046245 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215\": container with ID starting with 4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215 not found: ID does not exist" containerID="4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.046293 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215"} err="failed to get container status \"4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215\": rpc error: code = NotFound desc = could not find container \"4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215\": container with ID starting with 4ed067c0ae0bae7f0bccea6cd0558142eb61e3286fdbc0f1027da6aa4848e215 not found: ID does not exist" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.046329 4982 scope.go:117] "RemoveContainer" containerID="0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4" Jan 23 08:42:56 crc kubenswrapper[4982]: E0123 08:42:56.046949 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4\": container with ID starting with 0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4 not found: ID does not exist" containerID="0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.046981 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4"} err="failed to get container status \"0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4\": rpc error: code = NotFound desc = could not find container \"0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4\": container with ID starting with 0e4833a8a0f5c24d6a3f2b2518b8400434489f0100af14dd0c952b8561872bb4 not found: ID does not exist" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.477458 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" (UID: "a8fdafa3-e790-4e08-95ae-0f4f8a848ddc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.520202 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.624176 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdzhk"] Jan 23 08:42:56 crc kubenswrapper[4982]: I0123 08:42:56.634062 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdzhk"] Jan 23 08:42:57 crc kubenswrapper[4982]: I0123 08:42:57.840481 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" path="/var/lib/kubelet/pods/a8fdafa3-e790-4e08-95ae-0f4f8a848ddc/volumes" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.053187 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dvvn"] Jan 23 08:43:32 crc kubenswrapper[4982]: E0123 08:43:32.054039 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="extract-content" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.054054 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="extract-content" Jan 23 08:43:32 crc kubenswrapper[4982]: E0123 08:43:32.054074 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="registry-server" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.054080 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="registry-server" Jan 23 08:43:32 crc kubenswrapper[4982]: E0123 08:43:32.054096 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="extract-utilities" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.054101 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="extract-utilities" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.054273 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fdafa3-e790-4e08-95ae-0f4f8a848ddc" containerName="registry-server" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.055436 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.090708 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dvvn"] Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.110978 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hq8j\" (UniqueName: \"kubernetes.io/projected/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-kube-api-access-5hq8j\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.111025 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-catalog-content\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.111215 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-utilities\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.212829 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hq8j\" (UniqueName: \"kubernetes.io/projected/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-kube-api-access-5hq8j\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.212891 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-catalog-content\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.212981 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-utilities\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.213451 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-catalog-content\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.213530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-utilities\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.232428 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5hq8j\" (UniqueName: \"kubernetes.io/projected/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-kube-api-access-5hq8j\") pod \"community-operators-2dvvn\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.377118 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:32 crc kubenswrapper[4982]: I0123 08:43:32.750111 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dvvn"] Jan 23 08:43:33 crc kubenswrapper[4982]: I0123 08:43:33.297265 4982 generic.go:334] "Generic (PLEG): container finished" podID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerID="832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa" exitCode=0 Jan 23 08:43:33 crc kubenswrapper[4982]: I0123 08:43:33.297392 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerDied","Data":"832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa"} Jan 23 08:43:33 crc kubenswrapper[4982]: I0123 08:43:33.298665 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerStarted","Data":"442ec52bdc1c6e425d5ac0e7fdb80db8d9f103ba5f883803422959e5aef3283d"} Jan 23 08:43:34 crc kubenswrapper[4982]: I0123 08:43:34.306895 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerStarted","Data":"c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0"} Jan 23 08:43:35 crc kubenswrapper[4982]: I0123 08:43:35.319378 4982 generic.go:334] "Generic (PLEG): container finished" podID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerID="c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0" exitCode=0 Jan 23 08:43:35 crc kubenswrapper[4982]: I0123 08:43:35.319429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerDied","Data":"c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0"} Jan 23 08:43:36 crc kubenswrapper[4982]: I0123 08:43:36.330335 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerStarted","Data":"4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39"} Jan 23 08:43:36 crc kubenswrapper[4982]: I0123 08:43:36.349607 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dvvn" podStartSLOduration=1.9380179709999998 podStartE2EDuration="4.349583368s" podCreationTimestamp="2026-01-23 08:43:32 +0000 UTC" firstStartedPulling="2026-01-23 08:43:33.30005019 +0000 UTC m=+2887.780413576" lastFinishedPulling="2026-01-23 08:43:35.711615587 +0000 UTC m=+2890.191978973" observedRunningTime="2026-01-23 08:43:36.349323321 +0000 UTC m=+2890.829686727" watchObservedRunningTime="2026-01-23 08:43:36.349583368 +0000 UTC m=+2890.829946754" Jan 23 08:43:39 crc kubenswrapper[4982]: I0123 08:43:39.846242 4982 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wz2t6"] Jan 23 08:43:39 crc kubenswrapper[4982]: I0123 08:43:39.848171 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:39 crc kubenswrapper[4982]: I0123 08:43:39.850389 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz2t6"] Jan 23 08:43:39 crc kubenswrapper[4982]: I0123 08:43:39.966958 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-utilities\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:39 crc kubenswrapper[4982]: I0123 08:43:39.967276 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffr55\" (UniqueName: \"kubernetes.io/projected/25815327-592c-4b4a-952e-18ddc9f24dff-kube-api-access-ffr55\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:39 crc kubenswrapper[4982]: I0123 08:43:39.967308 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-catalog-content\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.068890 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-utilities\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.068947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffr55\" (UniqueName: \"kubernetes.io/projected/25815327-592c-4b4a-952e-18ddc9f24dff-kube-api-access-ffr55\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.068983 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-catalog-content\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.069500 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-utilities\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.069530 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-catalog-content\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") 
" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.102710 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffr55\" (UniqueName: \"kubernetes.io/projected/25815327-592c-4b4a-952e-18ddc9f24dff-kube-api-access-ffr55\") pod \"redhat-marketplace-wz2t6\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.180209 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:40 crc kubenswrapper[4982]: I0123 08:43:40.686299 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz2t6"] Jan 23 08:43:40 crc kubenswrapper[4982]: W0123 08:43:40.688999 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25815327_592c_4b4a_952e_18ddc9f24dff.slice/crio-f8a16ab34df27c824373f3c7ed6ef6e1bff1ed30eaa0f27ae4cebd4953a2cc5e WatchSource:0}: Error finding container f8a16ab34df27c824373f3c7ed6ef6e1bff1ed30eaa0f27ae4cebd4953a2cc5e: Status 404 returned error can't find the container with id f8a16ab34df27c824373f3c7ed6ef6e1bff1ed30eaa0f27ae4cebd4953a2cc5e Jan 23 08:43:41 crc kubenswrapper[4982]: I0123 08:43:41.378301 4982 generic.go:334] "Generic (PLEG): container finished" podID="25815327-592c-4b4a-952e-18ddc9f24dff" containerID="d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d" exitCode=0 Jan 23 08:43:41 crc kubenswrapper[4982]: I0123 08:43:41.378350 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerDied","Data":"d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d"} Jan 23 08:43:41 crc kubenswrapper[4982]: I0123 08:43:41.378375 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerStarted","Data":"f8a16ab34df27c824373f3c7ed6ef6e1bff1ed30eaa0f27ae4cebd4953a2cc5e"} Jan 23 08:43:42 crc kubenswrapper[4982]: I0123 08:43:42.377391 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:42 crc kubenswrapper[4982]: I0123 08:43:42.377990 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:42 crc kubenswrapper[4982]: I0123 08:43:42.389855 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerStarted","Data":"d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411"} Jan 23 08:43:42 crc kubenswrapper[4982]: I0123 08:43:42.427343 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:43 crc kubenswrapper[4982]: I0123 08:43:43.399881 4982 generic.go:334] "Generic (PLEG): container finished" podID="25815327-592c-4b4a-952e-18ddc9f24dff" containerID="d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411" exitCode=0 Jan 23 08:43:43 crc kubenswrapper[4982]: I0123 08:43:43.399999 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerDied","Data":"d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411"} Jan 23 08:43:43 crc kubenswrapper[4982]: I0123 08:43:43.455866 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:44 crc kubenswrapper[4982]: I0123 08:43:44.412427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerStarted","Data":"b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96"} Jan 23 08:43:44 crc kubenswrapper[4982]: I0123 08:43:44.432600 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wz2t6" podStartSLOduration=2.8707871369999998 podStartE2EDuration="5.432583968s" podCreationTimestamp="2026-01-23 08:43:39 +0000 UTC" firstStartedPulling="2026-01-23 08:43:41.38087658 +0000 UTC m=+2895.861239966" lastFinishedPulling="2026-01-23 08:43:43.942673411 +0000 UTC m=+2898.423036797" observedRunningTime="2026-01-23 08:43:44.428313079 +0000 UTC m=+2898.908676465" watchObservedRunningTime="2026-01-23 08:43:44.432583968 +0000 UTC m=+2898.912947354" Jan 23 08:43:44 crc kubenswrapper[4982]: I0123 08:43:44.830096 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dvvn"] Jan 23 08:43:45 crc kubenswrapper[4982]: I0123 08:43:45.419124 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2dvvn" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="registry-server" containerID="cri-o://4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39" gracePeriod=2 Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.293964 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.367158 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hq8j\" (UniqueName: \"kubernetes.io/projected/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-kube-api-access-5hq8j\") pod \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.367367 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-utilities\") pod \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.367420 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-catalog-content\") pod \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\" (UID: \"0e4fa02a-ab02-4908-9cb5-6221a19a99c1\") " Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.373448 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-utilities" (OuterVolumeSpecName: "utilities") pod "0e4fa02a-ab02-4908-9cb5-6221a19a99c1" (UID: "0e4fa02a-ab02-4908-9cb5-6221a19a99c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.377317 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-kube-api-access-5hq8j" (OuterVolumeSpecName: "kube-api-access-5hq8j") pod "0e4fa02a-ab02-4908-9cb5-6221a19a99c1" (UID: "0e4fa02a-ab02-4908-9cb5-6221a19a99c1"). InnerVolumeSpecName "kube-api-access-5hq8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.469686 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.469717 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hq8j\" (UniqueName: \"kubernetes.io/projected/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-kube-api-access-5hq8j\") on node \"crc\" DevicePath \"\"" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.473152 4982 generic.go:334] "Generic (PLEG): container finished" podID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerID="4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39" exitCode=0 Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.473192 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerDied","Data":"4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39"} Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.473221 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dvvn" event={"ID":"0e4fa02a-ab02-4908-9cb5-6221a19a99c1","Type":"ContainerDied","Data":"442ec52bdc1c6e425d5ac0e7fdb80db8d9f103ba5f883803422959e5aef3283d"} Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.473238 4982 scope.go:117] "RemoveContainer" containerID="4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.473366 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dvvn" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.485840 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e4fa02a-ab02-4908-9cb5-6221a19a99c1" (UID: "0e4fa02a-ab02-4908-9cb5-6221a19a99c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.511211 4982 scope.go:117] "RemoveContainer" containerID="c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.528772 4982 scope.go:117] "RemoveContainer" containerID="832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.556091 4982 scope.go:117] "RemoveContainer" containerID="4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39" Jan 23 08:43:46 crc kubenswrapper[4982]: E0123 08:43:46.562160 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39\": container with ID starting with 4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39 not found: ID does not exist" containerID="4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.562203 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39"} err="failed to get container status \"4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39\": rpc error: code = NotFound desc = could not find container \"4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39\": container with ID starting with 4c6504e64b91f0b4cc9f22e8fc7aead7dd0a370587585f03be70921d836a1a39 not found: ID does not exist" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.562228 4982 scope.go:117] "RemoveContainer" containerID="c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0" Jan 23 08:43:46 crc kubenswrapper[4982]: E0123 08:43:46.562854 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0\": container with ID starting with c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0 not found: ID does not exist" containerID="c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.562918 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0"} err="failed to get container status \"c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0\": rpc error: code = NotFound desc = could not find container \"c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0\": container with ID starting with c1ab8132d33ee2d844995161e3651454c5e1c80db6681ae101789957ef7644c0 not found: ID does not exist" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.562948 4982 scope.go:117] "RemoveContainer" containerID="832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa" Jan 23 08:43:46 crc kubenswrapper[4982]: E0123 08:43:46.563344 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa\": container with ID starting with 832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa not found: ID does not exist" containerID="832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa" Jan 23 08:43:46 crc 
kubenswrapper[4982]: I0123 08:43:46.563374 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa"} err="failed to get container status \"832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa\": rpc error: code = NotFound desc = could not find container \"832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa\": container with ID starting with 832ec00eb844822ed0dfa3860d7e18843a3f087acd871853182139cf400825fa not found: ID does not exist" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.571651 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e4fa02a-ab02-4908-9cb5-6221a19a99c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.803910 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dvvn"] Jan 23 08:43:46 crc kubenswrapper[4982]: I0123 08:43:46.811516 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dvvn"] Jan 23 08:43:47 crc kubenswrapper[4982]: I0123 08:43:47.837265 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" path="/var/lib/kubelet/pods/0e4fa02a-ab02-4908-9cb5-6221a19a99c1/volumes" Jan 23 08:43:50 crc kubenswrapper[4982]: I0123 08:43:50.180814 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:50 crc kubenswrapper[4982]: I0123 08:43:50.181194 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:50 crc kubenswrapper[4982]: I0123 08:43:50.227986 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:50 crc kubenswrapper[4982]: I0123 08:43:50.547662 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:50 crc kubenswrapper[4982]: I0123 08:43:50.834179 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz2t6"] Jan 23 08:43:52 crc kubenswrapper[4982]: I0123 08:43:52.524032 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wz2t6" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="registry-server" containerID="cri-o://b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96" gracePeriod=2 Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.238724 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.366487 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffr55\" (UniqueName: \"kubernetes.io/projected/25815327-592c-4b4a-952e-18ddc9f24dff-kube-api-access-ffr55\") pod \"25815327-592c-4b4a-952e-18ddc9f24dff\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.366930 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-utilities\") pod \"25815327-592c-4b4a-952e-18ddc9f24dff\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.367054 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-catalog-content\") pod \"25815327-592c-4b4a-952e-18ddc9f24dff\" (UID: \"25815327-592c-4b4a-952e-18ddc9f24dff\") " Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.368376 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-utilities" (OuterVolumeSpecName: "utilities") pod "25815327-592c-4b4a-952e-18ddc9f24dff" (UID: "25815327-592c-4b4a-952e-18ddc9f24dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.376658 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25815327-592c-4b4a-952e-18ddc9f24dff-kube-api-access-ffr55" (OuterVolumeSpecName: "kube-api-access-ffr55") pod "25815327-592c-4b4a-952e-18ddc9f24dff" (UID: "25815327-592c-4b4a-952e-18ddc9f24dff"). InnerVolumeSpecName "kube-api-access-ffr55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.396096 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25815327-592c-4b4a-952e-18ddc9f24dff" (UID: "25815327-592c-4b4a-952e-18ddc9f24dff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.469063 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.469098 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffr55\" (UniqueName: \"kubernetes.io/projected/25815327-592c-4b4a-952e-18ddc9f24dff-kube-api-access-ffr55\") on node \"crc\" DevicePath \"\"" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.469110 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25815327-592c-4b4a-952e-18ddc9f24dff-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.534043 4982 generic.go:334] "Generic (PLEG): container finished" podID="25815327-592c-4b4a-952e-18ddc9f24dff" containerID="b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96" exitCode=0 Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.534092 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerDied","Data":"b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96"} Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.534119 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wz2t6" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.534143 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wz2t6" event={"ID":"25815327-592c-4b4a-952e-18ddc9f24dff","Type":"ContainerDied","Data":"f8a16ab34df27c824373f3c7ed6ef6e1bff1ed30eaa0f27ae4cebd4953a2cc5e"} Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.534164 4982 scope.go:117] "RemoveContainer" containerID="b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.553114 4982 scope.go:117] "RemoveContainer" containerID="d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.571252 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz2t6"] Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.579021 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wz2t6"] Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.586112 4982 scope.go:117] "RemoveContainer" containerID="d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.610780 4982 scope.go:117] "RemoveContainer" containerID="b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96" Jan 23 08:43:53 crc kubenswrapper[4982]: E0123 08:43:53.611412 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96\": container with ID starting with b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96 not found: ID does not exist" containerID="b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.611449 4982 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96"} err="failed to get container status \"b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96\": rpc error: code = NotFound desc = could not find container \"b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96\": container with ID starting with b054bcec2e93bf767f5227e4f3d097f2bb605705b53d4c4d8257602bde273e96 not found: ID does not exist" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.611475 4982 scope.go:117] "RemoveContainer" containerID="d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411" Jan 23 08:43:53 crc kubenswrapper[4982]: E0123 08:43:53.611924 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411\": container with ID starting with d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411 not found: ID does not exist" containerID="d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.611962 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411"} err="failed to get container status \"d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411\": rpc error: code = NotFound desc = could not find container \"d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411\": container with ID starting with d3d633dd1080fb82b06e7e071b539b75ab4f9aa9761071fca41ab8f1145d4411 not found: ID does not exist" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.611990 4982 scope.go:117] "RemoveContainer" containerID="d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d" Jan 23 08:43:53 crc kubenswrapper[4982]: E0123 08:43:53.612413 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d\": container with ID starting with d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d not found: ID does not exist" containerID="d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.612448 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d"} err="failed to get container status \"d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d\": rpc error: code = NotFound desc = could not find container \"d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d\": container with ID starting with d84a362f22fd96250e99977c48fbcedee35e3d9935b7f603e2f9d3e596a3230d not found: ID does not exist" Jan 23 08:43:53 crc kubenswrapper[4982]: I0123 08:43:53.847953 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" path="/var/lib/kubelet/pods/25815327-592c-4b4a-952e-18ddc9f24dff/volumes" Jan 23 08:44:11 crc kubenswrapper[4982]: I0123 08:44:11.435466 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:44:11 crc kubenswrapper[4982]: I0123 08:44:11.436219 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:44:41 crc kubenswrapper[4982]: I0123 08:44:41.435529 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:44:41 crc kubenswrapper[4982]: I0123 08:44:41.436193 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.155606 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2"] Jan 23 08:45:00 crc kubenswrapper[4982]: E0123 08:45:00.157502 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="extract-content" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157521 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="extract-content" Jan 23 08:45:00 crc kubenswrapper[4982]: E0123 08:45:00.157535 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="extract-utilities" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157541 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="extract-utilities" Jan 23 08:45:00 crc kubenswrapper[4982]: E0123 08:45:00.157555 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="registry-server" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157563 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="registry-server" Jan 23 08:45:00 crc kubenswrapper[4982]: E0123 08:45:00.157575 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="extract-content" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157581 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="extract-content" Jan 23 08:45:00 crc kubenswrapper[4982]: E0123 08:45:00.157594 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="registry-server" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157600 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="registry-server" Jan 23 08:45:00 crc kubenswrapper[4982]: E0123 08:45:00.157614 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" 
containerName="extract-utilities" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157638 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="extract-utilities" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157816 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e4fa02a-ab02-4908-9cb5-6221a19a99c1" containerName="registry-server" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.157836 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="25815327-592c-4b4a-952e-18ddc9f24dff" containerName="registry-server" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.158319 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.163103 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.163451 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.176738 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frf6n\" (UniqueName: \"kubernetes.io/projected/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-kube-api-access-frf6n\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.176817 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-secret-volume\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.176996 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-config-volume\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.196429 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2"] Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.278959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-config-volume\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.279016 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frf6n\" (UniqueName: \"kubernetes.io/projected/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-kube-api-access-frf6n\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.279057 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-secret-volume\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.280093 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-config-volume\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.285784 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-secret-volume\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.296478 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frf6n\" (UniqueName: \"kubernetes.io/projected/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-kube-api-access-frf6n\") pod \"collect-profiles-29485965-c86w2\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.498771 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:00 crc kubenswrapper[4982]: I0123 08:45:00.753673 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2"] Jan 23 08:45:01 crc kubenswrapper[4982]: I0123 08:45:01.136090 4982 generic.go:334] "Generic (PLEG): container finished" podID="c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" containerID="3c0ef27efac9f10b26b8490db8f315ad9da10ce1280d739cdc63f1454ff82c24" exitCode=0 Jan 23 08:45:01 crc kubenswrapper[4982]: I0123 08:45:01.136149 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" event={"ID":"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8","Type":"ContainerDied","Data":"3c0ef27efac9f10b26b8490db8f315ad9da10ce1280d739cdc63f1454ff82c24"} Jan 23 08:45:01 crc kubenswrapper[4982]: I0123 08:45:01.136409 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" event={"ID":"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8","Type":"ContainerStarted","Data":"900aaa3382def0ec3b1de5d257896d13d7a622bb1e280ef94f549c213f50f51e"} Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.421912 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.515478 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-config-volume\") pod \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.515605 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-secret-volume\") pod \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.515712 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frf6n\" (UniqueName: \"kubernetes.io/projected/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-kube-api-access-frf6n\") pod \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\" (UID: \"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8\") " Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.516353 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" (UID: "c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.520912 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" (UID: "c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.521000 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-kube-api-access-frf6n" (OuterVolumeSpecName: "kube-api-access-frf6n") pod "c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" (UID: "c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8"). InnerVolumeSpecName "kube-api-access-frf6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.617023 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frf6n\" (UniqueName: \"kubernetes.io/projected/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-kube-api-access-frf6n\") on node \"crc\" DevicePath \"\"" Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.617061 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:45:02 crc kubenswrapper[4982]: I0123 08:45:02.617073 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:45:03 crc kubenswrapper[4982]: I0123 08:45:03.155152 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" event={"ID":"c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8","Type":"ContainerDied","Data":"900aaa3382def0ec3b1de5d257896d13d7a622bb1e280ef94f549c213f50f51e"} Jan 23 08:45:03 crc kubenswrapper[4982]: I0123 08:45:03.155190 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900aaa3382def0ec3b1de5d257896d13d7a622bb1e280ef94f549c213f50f51e" Jan 23 08:45:03 crc kubenswrapper[4982]: I0123 08:45:03.155196 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2" Jan 23 08:45:03 crc kubenswrapper[4982]: I0123 08:45:03.500480 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h"] Jan 23 08:45:03 crc kubenswrapper[4982]: I0123 08:45:03.508324 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485920-9ld8h"] Jan 23 08:45:03 crc kubenswrapper[4982]: I0123 08:45:03.838565 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c459257-75e2-4837-acf8-9ada114fe1bf" path="/var/lib/kubelet/pods/1c459257-75e2-4837-acf8-9ada114fe1bf/volumes" Jan 23 08:45:11 crc kubenswrapper[4982]: I0123 08:45:11.435961 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:45:11 crc kubenswrapper[4982]: I0123 08:45:11.436400 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:45:11 crc kubenswrapper[4982]: I0123 08:45:11.436444 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:45:11 crc kubenswrapper[4982]: I0123 08:45:11.437033 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736"} 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:45:11 crc kubenswrapper[4982]: I0123 08:45:11.437082 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" gracePeriod=600 Jan 23 08:45:12 crc kubenswrapper[4982]: E0123 08:45:12.058023 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:45:12 crc kubenswrapper[4982]: I0123 08:45:12.230934 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" exitCode=0 Jan 23 08:45:12 crc kubenswrapper[4982]: I0123 08:45:12.230976 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736"} Jan 23 08:45:12 crc kubenswrapper[4982]: I0123 08:45:12.231403 4982 scope.go:117] "RemoveContainer" containerID="0c1e571d0b23feda2547c5e8ed8474731ee4de013907e867fabbfef49746d7eb" Jan 23 08:45:12 crc kubenswrapper[4982]: I0123 08:45:12.231967 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:45:12 crc kubenswrapper[4982]: E0123 08:45:12.232254 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:45:26 crc kubenswrapper[4982]: I0123 08:45:26.829213 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:45:26 crc kubenswrapper[4982]: E0123 08:45:26.830672 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:45:39 crc kubenswrapper[4982]: I0123 08:45:39.827064 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:45:39 crc kubenswrapper[4982]: E0123 08:45:39.827920 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:45:41 crc kubenswrapper[4982]: I0123 08:45:41.781056 4982 scope.go:117] "RemoveContainer" containerID="56561e505019d39265e49374c4be788e197504a3e7bf9dcb69689fa9b77b44b6" Jan 23 08:45:54 crc kubenswrapper[4982]: I0123 08:45:54.827575 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:45:54 crc kubenswrapper[4982]: E0123 08:45:54.828462 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:46:08 crc kubenswrapper[4982]: I0123 08:46:08.827751 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:46:08 crc kubenswrapper[4982]: E0123 08:46:08.829691 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:46:19 crc kubenswrapper[4982]: I0123 08:46:19.826989 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:46:19 crc kubenswrapper[4982]: E0123 08:46:19.827778 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:46:30 crc kubenswrapper[4982]: I0123 08:46:30.827881 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:46:30 crc kubenswrapper[4982]: E0123 08:46:30.828954 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:46:45 crc kubenswrapper[4982]: I0123 08:46:45.832790 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:46:45 crc kubenswrapper[4982]: E0123 08:46:45.833562 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:46:56 crc kubenswrapper[4982]: I0123 08:46:56.827397 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:46:56 crc kubenswrapper[4982]: E0123 08:46:56.828340 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:47:11 crc kubenswrapper[4982]: I0123 08:47:11.827995 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:47:11 crc kubenswrapper[4982]: E0123 08:47:11.828894 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:47:22 crc kubenswrapper[4982]: I0123 08:47:22.828436 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:47:22 crc kubenswrapper[4982]: E0123 08:47:22.829209 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:47:36 crc kubenswrapper[4982]: I0123 08:47:36.827493 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:47:36 crc kubenswrapper[4982]: E0123 08:47:36.828692 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:47:50 crc kubenswrapper[4982]: I0123 08:47:50.827210 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:47:50 crc kubenswrapper[4982]: E0123 08:47:50.828018 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:48:05 crc kubenswrapper[4982]: I0123 08:48:05.832930 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:48:05 crc kubenswrapper[4982]: E0123 08:48:05.833587 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:48:18 crc kubenswrapper[4982]: I0123 08:48:18.827519 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:48:18 crc kubenswrapper[4982]: E0123 08:48:18.828420 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:48:33 crc kubenswrapper[4982]: I0123 08:48:33.827756 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:48:33 crc kubenswrapper[4982]: E0123 08:48:33.828423 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:48:48 crc kubenswrapper[4982]: I0123 08:48:48.827140 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:48:48 crc kubenswrapper[4982]: E0123 08:48:48.828158 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:49:03 crc kubenswrapper[4982]: I0123 08:49:03.827741 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:49:03 crc kubenswrapper[4982]: E0123 08:49:03.828914 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:49:17 crc kubenswrapper[4982]: I0123 08:49:17.827719 4982 
scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:49:17 crc kubenswrapper[4982]: E0123 08:49:17.828581 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:49:31 crc kubenswrapper[4982]: I0123 08:49:31.828304 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:49:31 crc kubenswrapper[4982]: E0123 08:49:31.829537 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:49:45 crc kubenswrapper[4982]: I0123 08:49:45.834385 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:49:45 crc kubenswrapper[4982]: E0123 08:49:45.835074 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:49:56 crc kubenswrapper[4982]: I0123 08:49:56.827377 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:49:56 crc kubenswrapper[4982]: E0123 08:49:56.828198 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:50:11 crc kubenswrapper[4982]: I0123 08:50:11.827460 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:50:12 crc kubenswrapper[4982]: I0123 08:50:12.612911 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"ac3b5df8ab218d181814be454c30580f21ff032d3db629a1deeba03790412065"} Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.000073 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxvq4"] Jan 23 08:52:21 crc kubenswrapper[4982]: E0123 08:52:21.001092 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" containerName="collect-profiles" Jan 23 08:52:21 crc kubenswrapper[4982]: 
I0123 08:52:21.001109 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" containerName="collect-profiles" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.001320 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" containerName="collect-profiles" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.004880 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.038210 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxvq4"] Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.066107 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-catalog-content\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.066175 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f74m7\" (UniqueName: \"kubernetes.io/projected/74cfe351-78c6-4cce-90a1-78b479fc939a-kube-api-access-f74m7\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.066240 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-utilities\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.167470 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-catalog-content\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.167858 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f74m7\" (UniqueName: \"kubernetes.io/projected/74cfe351-78c6-4cce-90a1-78b479fc939a-kube-api-access-f74m7\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.167918 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-utilities\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.168024 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-catalog-content\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 
23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.168346 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-utilities\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.186133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f74m7\" (UniqueName: \"kubernetes.io/projected/74cfe351-78c6-4cce-90a1-78b479fc939a-kube-api-access-f74m7\") pod \"certified-operators-gxvq4\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.396537 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:21 crc kubenswrapper[4982]: I0123 08:52:21.756661 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxvq4"] Jan 23 08:52:22 crc kubenswrapper[4982]: I0123 08:52:22.614640 4982 generic.go:334] "Generic (PLEG): container finished" podID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerID="bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1" exitCode=0 Jan 23 08:52:22 crc kubenswrapper[4982]: I0123 08:52:22.614938 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxvq4" event={"ID":"74cfe351-78c6-4cce-90a1-78b479fc939a","Type":"ContainerDied","Data":"bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1"} Jan 23 08:52:22 crc kubenswrapper[4982]: I0123 08:52:22.614970 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxvq4" event={"ID":"74cfe351-78c6-4cce-90a1-78b479fc939a","Type":"ContainerStarted","Data":"78a6cb6124f236b7692a6aba15d6e4d76cf4e059c6e5acb13f559a9d219740ea"} Jan 23 08:52:22 crc kubenswrapper[4982]: I0123 08:52:22.617236 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:52:24 crc kubenswrapper[4982]: I0123 08:52:24.631456 4982 generic.go:334] "Generic (PLEG): container finished" podID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerID="20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf" exitCode=0 Jan 23 08:52:24 crc kubenswrapper[4982]: I0123 08:52:24.631574 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxvq4" event={"ID":"74cfe351-78c6-4cce-90a1-78b479fc939a","Type":"ContainerDied","Data":"20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf"} Jan 23 08:52:25 crc kubenswrapper[4982]: I0123 08:52:25.641357 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxvq4" event={"ID":"74cfe351-78c6-4cce-90a1-78b479fc939a","Type":"ContainerStarted","Data":"85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72"} Jan 23 08:52:25 crc kubenswrapper[4982]: I0123 08:52:25.663078 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxvq4" podStartSLOduration=3.004620688 podStartE2EDuration="5.66306101s" podCreationTimestamp="2026-01-23 08:52:20 +0000 UTC" firstStartedPulling="2026-01-23 08:52:22.617030289 +0000 UTC m=+3417.097393675" lastFinishedPulling="2026-01-23 08:52:25.275470611 +0000 
UTC m=+3419.755833997" observedRunningTime="2026-01-23 08:52:25.657985322 +0000 UTC m=+3420.138348728" watchObservedRunningTime="2026-01-23 08:52:25.66306101 +0000 UTC m=+3420.143424386" Jan 23 08:52:31 crc kubenswrapper[4982]: I0123 08:52:31.396773 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:31 crc kubenswrapper[4982]: I0123 08:52:31.397309 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:31 crc kubenswrapper[4982]: I0123 08:52:31.444456 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:31 crc kubenswrapper[4982]: I0123 08:52:31.749679 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:35 crc kubenswrapper[4982]: I0123 08:52:35.096825 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxvq4"] Jan 23 08:52:35 crc kubenswrapper[4982]: I0123 08:52:35.097434 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxvq4" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="registry-server" containerID="cri-o://85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72" gracePeriod=2 Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.629685 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.733391 4982 generic.go:334] "Generic (PLEG): container finished" podID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerID="85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72" exitCode=0 Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.733445 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxvq4" event={"ID":"74cfe351-78c6-4cce-90a1-78b479fc939a","Type":"ContainerDied","Data":"85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72"} Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.733484 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxvq4" event={"ID":"74cfe351-78c6-4cce-90a1-78b479fc939a","Type":"ContainerDied","Data":"78a6cb6124f236b7692a6aba15d6e4d76cf4e059c6e5acb13f559a9d219740ea"} Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.733483 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxvq4" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.733506 4982 scope.go:117] "RemoveContainer" containerID="85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.753312 4982 scope.go:117] "RemoveContainer" containerID="20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.776198 4982 scope.go:117] "RemoveContainer" containerID="bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.808724 4982 scope.go:117] "RemoveContainer" containerID="85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72" Jan 23 08:52:36 crc kubenswrapper[4982]: E0123 08:52:36.809336 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72\": container with ID starting with 85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72 not found: ID does not exist" containerID="85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.809391 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72"} err="failed to get container status \"85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72\": rpc error: code = NotFound desc = could not find container \"85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72\": container with ID starting with 85d87e75e0005782ba0e4622f4209a05ab3f814c1c7be69e76b1b896bd091e72 not found: ID does not exist" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.809424 4982 scope.go:117] "RemoveContainer" containerID="20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf" Jan 23 08:52:36 crc kubenswrapper[4982]: E0123 08:52:36.809872 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf\": container with ID starting with 20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf not found: ID does not exist" containerID="20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.809946 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf"} err="failed to get container status \"20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf\": rpc error: code = NotFound desc = could not find container \"20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf\": container with ID starting with 20d4f99d74bacb3531fd94e0e0a921fa08ce7a3bc0210a1a67fd4562d794bcbf not found: ID does not exist" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.810060 4982 scope.go:117] "RemoveContainer" containerID="bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1" Jan 23 08:52:36 crc kubenswrapper[4982]: E0123 08:52:36.810652 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1\": container with ID starting 
with bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1 not found: ID does not exist" containerID="bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.810707 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1"} err="failed to get container status \"bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1\": rpc error: code = NotFound desc = could not find container \"bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1\": container with ID starting with bbfbf5f3af9b34ecf1d257d858a838b21baaaff02f5b94f46be442d85ceed2d1 not found: ID does not exist" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.820394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f74m7\" (UniqueName: \"kubernetes.io/projected/74cfe351-78c6-4cce-90a1-78b479fc939a-kube-api-access-f74m7\") pod \"74cfe351-78c6-4cce-90a1-78b479fc939a\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.820485 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-utilities\") pod \"74cfe351-78c6-4cce-90a1-78b479fc939a\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.820547 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-catalog-content\") pod \"74cfe351-78c6-4cce-90a1-78b479fc939a\" (UID: \"74cfe351-78c6-4cce-90a1-78b479fc939a\") " Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.822811 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-utilities" (OuterVolumeSpecName: "utilities") pod "74cfe351-78c6-4cce-90a1-78b479fc939a" (UID: "74cfe351-78c6-4cce-90a1-78b479fc939a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.827853 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74cfe351-78c6-4cce-90a1-78b479fc939a-kube-api-access-f74m7" (OuterVolumeSpecName: "kube-api-access-f74m7") pod "74cfe351-78c6-4cce-90a1-78b479fc939a" (UID: "74cfe351-78c6-4cce-90a1-78b479fc939a"). InnerVolumeSpecName "kube-api-access-f74m7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.871346 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74cfe351-78c6-4cce-90a1-78b479fc939a" (UID: "74cfe351-78c6-4cce-90a1-78b479fc939a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.922885 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f74m7\" (UniqueName: \"kubernetes.io/projected/74cfe351-78c6-4cce-90a1-78b479fc939a-kube-api-access-f74m7\") on node \"crc\" DevicePath \"\"" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.922929 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:52:36 crc kubenswrapper[4982]: I0123 08:52:36.922942 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74cfe351-78c6-4cce-90a1-78b479fc939a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:52:37 crc kubenswrapper[4982]: I0123 08:52:37.068260 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxvq4"] Jan 23 08:52:37 crc kubenswrapper[4982]: I0123 08:52:37.074430 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxvq4"] Jan 23 08:52:37 crc kubenswrapper[4982]: I0123 08:52:37.836982 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" path="/var/lib/kubelet/pods/74cfe351-78c6-4cce-90a1-78b479fc939a/volumes" Jan 23 08:52:41 crc kubenswrapper[4982]: I0123 08:52:41.435354 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:52:41 crc kubenswrapper[4982]: I0123 08:52:41.435648 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:53:11 crc kubenswrapper[4982]: I0123 08:53:11.435850 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:53:11 crc kubenswrapper[4982]: I0123 08:53:11.436447 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:53:41 crc kubenswrapper[4982]: I0123 08:53:41.435574 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:53:41 crc kubenswrapper[4982]: I0123 08:53:41.436257 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:53:41 crc kubenswrapper[4982]: I0123 08:53:41.436322 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:53:41 crc kubenswrapper[4982]: I0123 08:53:41.437069 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac3b5df8ab218d181814be454c30580f21ff032d3db629a1deeba03790412065"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:53:41 crc kubenswrapper[4982]: I0123 08:53:41.437126 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://ac3b5df8ab218d181814be454c30580f21ff032d3db629a1deeba03790412065" gracePeriod=600 Jan 23 08:53:42 crc kubenswrapper[4982]: I0123 08:53:42.245810 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="ac3b5df8ab218d181814be454c30580f21ff032d3db629a1deeba03790412065" exitCode=0 Jan 23 08:53:42 crc kubenswrapper[4982]: I0123 08:53:42.245863 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"ac3b5df8ab218d181814be454c30580f21ff032d3db629a1deeba03790412065"} Jan 23 08:53:42 crc kubenswrapper[4982]: I0123 08:53:42.246344 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"} Jan 23 08:53:42 crc kubenswrapper[4982]: I0123 08:53:42.246373 4982 scope.go:117] "RemoveContainer" containerID="50163f7367eb96950ae125fffdf7dd640a9c1d13e9fe4f70a8deb3165c1be736" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.434543 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ztppz"] Jan 23 08:54:06 crc kubenswrapper[4982]: E0123 08:54:06.435511 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="registry-server" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.435530 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="registry-server" Jan 23 08:54:06 crc kubenswrapper[4982]: E0123 08:54:06.435563 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="extract-content" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.435570 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="extract-content" Jan 23 08:54:06 crc kubenswrapper[4982]: E0123 08:54:06.435602 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="extract-utilities" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.435612 4982 
state_mem.go:107] "Deleted CPUSet assignment" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="extract-utilities" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.435825 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="74cfe351-78c6-4cce-90a1-78b479fc939a" containerName="registry-server" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.437413 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.446421 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztppz"] Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.611404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-catalog-content\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.611467 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd4j\" (UniqueName: \"kubernetes.io/projected/a88f6228-7402-4ec3-80fa-2da80eb12767-kube-api-access-fvd4j\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.611571 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-utilities\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.713084 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-utilities\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.713193 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-catalog-content\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.713213 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvd4j\" (UniqueName: \"kubernetes.io/projected/a88f6228-7402-4ec3-80fa-2da80eb12767-kube-api-access-fvd4j\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.713785 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-utilities\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.713841 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-catalog-content\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.737082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvd4j\" (UniqueName: \"kubernetes.io/projected/a88f6228-7402-4ec3-80fa-2da80eb12767-kube-api-access-fvd4j\") pod \"redhat-operators-ztppz\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:06 crc kubenswrapper[4982]: I0123 08:54:06.773393 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:07 crc kubenswrapper[4982]: I0123 08:54:07.232938 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztppz"] Jan 23 08:54:07 crc kubenswrapper[4982]: I0123 08:54:07.453907 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztppz" event={"ID":"a88f6228-7402-4ec3-80fa-2da80eb12767","Type":"ContainerStarted","Data":"e0de633b28db3bc041d852c94e6db6418f6d4b0482cb6c04fc5051ea9d4c1389"} Jan 23 08:54:08 crc kubenswrapper[4982]: I0123 08:54:08.461774 4982 generic.go:334] "Generic (PLEG): container finished" podID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerID="8808be23cf7bdf7135a8963df23199771cfdc1eb929528c1882f028b5db48f1c" exitCode=0 Jan 23 08:54:08 crc kubenswrapper[4982]: I0123 08:54:08.461862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztppz" event={"ID":"a88f6228-7402-4ec3-80fa-2da80eb12767","Type":"ContainerDied","Data":"8808be23cf7bdf7135a8963df23199771cfdc1eb929528c1882f028b5db48f1c"} Jan 23 08:54:10 crc kubenswrapper[4982]: I0123 08:54:10.497565 4982 generic.go:334] "Generic (PLEG): container finished" podID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerID="1a43814faf1840889c85859747df7a858a75452c859c0d6d8183084145e9100e" exitCode=0 Jan 23 08:54:10 crc kubenswrapper[4982]: I0123 08:54:10.497743 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztppz" event={"ID":"a88f6228-7402-4ec3-80fa-2da80eb12767","Type":"ContainerDied","Data":"1a43814faf1840889c85859747df7a858a75452c859c0d6d8183084145e9100e"} Jan 23 08:54:11 crc kubenswrapper[4982]: I0123 08:54:11.513581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztppz" event={"ID":"a88f6228-7402-4ec3-80fa-2da80eb12767","Type":"ContainerStarted","Data":"972a08ae8ea7e5e9c61bc44d902b29d3f01d8dcf04d02b39bb5ecf3358c6cc6b"} Jan 23 08:54:11 crc kubenswrapper[4982]: I0123 08:54:11.546290 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ztppz" podStartSLOduration=2.9586981310000002 podStartE2EDuration="5.546263445s" podCreationTimestamp="2026-01-23 08:54:06 +0000 UTC" firstStartedPulling="2026-01-23 08:54:08.463487711 +0000 UTC m=+3522.943851097" lastFinishedPulling="2026-01-23 08:54:11.051053025 +0000 UTC m=+3525.531416411" observedRunningTime="2026-01-23 08:54:11.53074052 +0000 UTC m=+3526.011103916" watchObservedRunningTime="2026-01-23 08:54:11.546263445 +0000 UTC m=+3526.026626831" Jan 23 08:54:12 crc kubenswrapper[4982]: I0123 
08:54:12.815752 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9p426"] Jan 23 08:54:12 crc kubenswrapper[4982]: I0123 08:54:12.817846 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:12 crc kubenswrapper[4982]: I0123 08:54:12.825952 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9p426"] Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.012508 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-utilities\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.012645 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-catalog-content\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.013575 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cfvc\" (UniqueName: \"kubernetes.io/projected/cb27b82f-c128-4887-b6bc-3c781b7a7b08-kube-api-access-9cfvc\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.115208 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-utilities\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.115298 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-catalog-content\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.115330 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cfvc\" (UniqueName: \"kubernetes.io/projected/cb27b82f-c128-4887-b6bc-3c781b7a7b08-kube-api-access-9cfvc\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.116251 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-utilities\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.116288 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-catalog-content\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.162338 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cfvc\" (UniqueName: \"kubernetes.io/projected/cb27b82f-c128-4887-b6bc-3c781b7a7b08-kube-api-access-9cfvc\") pod \"community-operators-9p426\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.434687 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:13 crc kubenswrapper[4982]: I0123 08:54:13.952910 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9p426"] Jan 23 08:54:13 crc kubenswrapper[4982]: W0123 08:54:13.961997 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb27b82f_c128_4887_b6bc_3c781b7a7b08.slice/crio-cea9e2c3e03e39edc928b3021bf82ca830e40f50c2ec17f7618e2753eab6dcca WatchSource:0}: Error finding container cea9e2c3e03e39edc928b3021bf82ca830e40f50c2ec17f7618e2753eab6dcca: Status 404 returned error can't find the container with id cea9e2c3e03e39edc928b3021bf82ca830e40f50c2ec17f7618e2753eab6dcca Jan 23 08:54:14 crc kubenswrapper[4982]: I0123 08:54:14.552748 4982 generic.go:334] "Generic (PLEG): container finished" podID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerID="0d6157919f14f18acdaafe5e6669ebdb9d72df0024ec396129de0660ea8b94e9" exitCode=0 Jan 23 08:54:14 crc kubenswrapper[4982]: I0123 08:54:14.552849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9p426" event={"ID":"cb27b82f-c128-4887-b6bc-3c781b7a7b08","Type":"ContainerDied","Data":"0d6157919f14f18acdaafe5e6669ebdb9d72df0024ec396129de0660ea8b94e9"} Jan 23 08:54:14 crc kubenswrapper[4982]: I0123 08:54:14.553089 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9p426" event={"ID":"cb27b82f-c128-4887-b6bc-3c781b7a7b08","Type":"ContainerStarted","Data":"cea9e2c3e03e39edc928b3021bf82ca830e40f50c2ec17f7618e2753eab6dcca"} Jan 23 08:54:16 crc kubenswrapper[4982]: I0123 08:54:16.774309 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:16 crc kubenswrapper[4982]: I0123 08:54:16.774660 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:16 crc kubenswrapper[4982]: I0123 08:54:16.818414 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:17 crc kubenswrapper[4982]: I0123 08:54:17.584609 4982 generic.go:334] "Generic (PLEG): container finished" podID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerID="50c0578b85955d9a05c3c9961e6762bd7a0f9501a61288a8f0e5cc864fbd7260" exitCode=0 Jan 23 08:54:17 crc kubenswrapper[4982]: I0123 08:54:17.586252 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9p426" 
event={"ID":"cb27b82f-c128-4887-b6bc-3c781b7a7b08","Type":"ContainerDied","Data":"50c0578b85955d9a05c3c9961e6762bd7a0f9501a61288a8f0e5cc864fbd7260"} Jan 23 08:54:17 crc kubenswrapper[4982]: I0123 08:54:17.642326 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:18 crc kubenswrapper[4982]: I0123 08:54:18.599149 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9p426" event={"ID":"cb27b82f-c128-4887-b6bc-3c781b7a7b08","Type":"ContainerStarted","Data":"d070ecb6bf4adf822681a2ca717ad15a2c1d82cc20140b137c69375000cc074b"} Jan 23 08:54:18 crc kubenswrapper[4982]: I0123 08:54:18.620980 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9p426" podStartSLOduration=3.105416754 podStartE2EDuration="6.620959917s" podCreationTimestamp="2026-01-23 08:54:12 +0000 UTC" firstStartedPulling="2026-01-23 08:54:14.555075063 +0000 UTC m=+3529.035438449" lastFinishedPulling="2026-01-23 08:54:18.070618236 +0000 UTC m=+3532.550981612" observedRunningTime="2026-01-23 08:54:18.615387934 +0000 UTC m=+3533.095751350" watchObservedRunningTime="2026-01-23 08:54:18.620959917 +0000 UTC m=+3533.101323303" Jan 23 08:54:19 crc kubenswrapper[4982]: I0123 08:54:19.210518 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztppz"] Jan 23 08:54:20 crc kubenswrapper[4982]: I0123 08:54:20.612897 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ztppz" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="registry-server" containerID="cri-o://972a08ae8ea7e5e9c61bc44d902b29d3f01d8dcf04d02b39bb5ecf3358c6cc6b" gracePeriod=2 Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.630001 4982 generic.go:334] "Generic (PLEG): container finished" podID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerID="972a08ae8ea7e5e9c61bc44d902b29d3f01d8dcf04d02b39bb5ecf3358c6cc6b" exitCode=0 Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.630082 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztppz" event={"ID":"a88f6228-7402-4ec3-80fa-2da80eb12767","Type":"ContainerDied","Data":"972a08ae8ea7e5e9c61bc44d902b29d3f01d8dcf04d02b39bb5ecf3358c6cc6b"} Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.912348 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.987291 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvd4j\" (UniqueName: \"kubernetes.io/projected/a88f6228-7402-4ec3-80fa-2da80eb12767-kube-api-access-fvd4j\") pod \"a88f6228-7402-4ec3-80fa-2da80eb12767\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.987419 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-catalog-content\") pod \"a88f6228-7402-4ec3-80fa-2da80eb12767\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.987473 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-utilities\") pod \"a88f6228-7402-4ec3-80fa-2da80eb12767\" (UID: \"a88f6228-7402-4ec3-80fa-2da80eb12767\") " Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.988908 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-utilities" (OuterVolumeSpecName: "utilities") pod "a88f6228-7402-4ec3-80fa-2da80eb12767" (UID: "a88f6228-7402-4ec3-80fa-2da80eb12767"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:54:22 crc kubenswrapper[4982]: I0123 08:54:22.993519 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88f6228-7402-4ec3-80fa-2da80eb12767-kube-api-access-fvd4j" (OuterVolumeSpecName: "kube-api-access-fvd4j") pod "a88f6228-7402-4ec3-80fa-2da80eb12767" (UID: "a88f6228-7402-4ec3-80fa-2da80eb12767"). InnerVolumeSpecName "kube-api-access-fvd4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.090322 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.090386 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvd4j\" (UniqueName: \"kubernetes.io/projected/a88f6228-7402-4ec3-80fa-2da80eb12767-kube-api-access-fvd4j\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.109718 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a88f6228-7402-4ec3-80fa-2da80eb12767" (UID: "a88f6228-7402-4ec3-80fa-2da80eb12767"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.192329 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a88f6228-7402-4ec3-80fa-2da80eb12767-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.435484 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.435856 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.481315 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.640643 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztppz" event={"ID":"a88f6228-7402-4ec3-80fa-2da80eb12767","Type":"ContainerDied","Data":"e0de633b28db3bc041d852c94e6db6418f6d4b0482cb6c04fc5051ea9d4c1389"} Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.640696 4982 scope.go:117] "RemoveContainer" containerID="972a08ae8ea7e5e9c61bc44d902b29d3f01d8dcf04d02b39bb5ecf3358c6cc6b" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.640702 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztppz" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.668845 4982 scope.go:117] "RemoveContainer" containerID="1a43814faf1840889c85859747df7a858a75452c859c0d6d8183084145e9100e" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.676542 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztppz"] Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.683566 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ztppz"] Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.693145 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.706737 4982 scope.go:117] "RemoveContainer" containerID="8808be23cf7bdf7135a8963df23199771cfdc1eb929528c1882f028b5db48f1c" Jan 23 08:54:23 crc kubenswrapper[4982]: I0123 08:54:23.837847 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" path="/var/lib/kubelet/pods/a88f6228-7402-4ec3-80fa-2da80eb12767/volumes" Jan 23 08:54:26 crc kubenswrapper[4982]: I0123 08:54:26.014507 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9p426"] Jan 23 08:54:26 crc kubenswrapper[4982]: I0123 08:54:26.014759 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9p426" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="registry-server" containerID="cri-o://d070ecb6bf4adf822681a2ca717ad15a2c1d82cc20140b137c69375000cc074b" gracePeriod=2 Jan 23 08:54:26 crc kubenswrapper[4982]: I0123 08:54:26.667308 4982 generic.go:334] "Generic (PLEG): container finished" podID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerID="d070ecb6bf4adf822681a2ca717ad15a2c1d82cc20140b137c69375000cc074b" exitCode=0 Jan 23 08:54:26 crc 
kubenswrapper[4982]: I0123 08:54:26.667474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9p426" event={"ID":"cb27b82f-c128-4887-b6bc-3c781b7a7b08","Type":"ContainerDied","Data":"d070ecb6bf4adf822681a2ca717ad15a2c1d82cc20140b137c69375000cc074b"} Jan 23 08:54:26 crc kubenswrapper[4982]: I0123 08:54:26.941537 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.049147 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-catalog-content\") pod \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.050000 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cfvc\" (UniqueName: \"kubernetes.io/projected/cb27b82f-c128-4887-b6bc-3c781b7a7b08-kube-api-access-9cfvc\") pod \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.050263 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-utilities\") pod \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\" (UID: \"cb27b82f-c128-4887-b6bc-3c781b7a7b08\") " Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.051603 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-utilities" (OuterVolumeSpecName: "utilities") pod "cb27b82f-c128-4887-b6bc-3c781b7a7b08" (UID: "cb27b82f-c128-4887-b6bc-3c781b7a7b08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.057684 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb27b82f-c128-4887-b6bc-3c781b7a7b08-kube-api-access-9cfvc" (OuterVolumeSpecName: "kube-api-access-9cfvc") pod "cb27b82f-c128-4887-b6bc-3c781b7a7b08" (UID: "cb27b82f-c128-4887-b6bc-3c781b7a7b08"). InnerVolumeSpecName "kube-api-access-9cfvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.109311 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb27b82f-c128-4887-b6bc-3c781b7a7b08" (UID: "cb27b82f-c128-4887-b6bc-3c781b7a7b08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.152091 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.152132 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb27b82f-c128-4887-b6bc-3c781b7a7b08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.152147 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cfvc\" (UniqueName: \"kubernetes.io/projected/cb27b82f-c128-4887-b6bc-3c781b7a7b08-kube-api-access-9cfvc\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.677972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9p426" event={"ID":"cb27b82f-c128-4887-b6bc-3c781b7a7b08","Type":"ContainerDied","Data":"cea9e2c3e03e39edc928b3021bf82ca830e40f50c2ec17f7618e2753eab6dcca"} Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.678025 4982 scope.go:117] "RemoveContainer" containerID="d070ecb6bf4adf822681a2ca717ad15a2c1d82cc20140b137c69375000cc074b" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.678030 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9p426" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.698362 4982 scope.go:117] "RemoveContainer" containerID="50c0578b85955d9a05c3c9961e6762bd7a0f9501a61288a8f0e5cc864fbd7260" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.711862 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9p426"] Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.719048 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9p426"] Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.737876 4982 scope.go:117] "RemoveContainer" containerID="0d6157919f14f18acdaafe5e6669ebdb9d72df0024ec396129de0660ea8b94e9" Jan 23 08:54:27 crc kubenswrapper[4982]: I0123 08:54:27.846324 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" path="/var/lib/kubelet/pods/cb27b82f-c128-4887-b6bc-3c781b7a7b08/volumes" Jan 23 08:55:41 crc kubenswrapper[4982]: I0123 08:55:41.435664 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:55:41 crc kubenswrapper[4982]: I0123 08:55:41.436353 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:56:11 crc kubenswrapper[4982]: I0123 08:56:11.435495 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:56:11 crc kubenswrapper[4982]: I0123 08:56:11.436912 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.436114 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.436698 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.436748 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.437356 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.437438 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6" gracePeriod=600 Jan 23 08:56:41 crc kubenswrapper[4982]: E0123 08:56:41.562033 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.923804 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6" exitCode=0 Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.923858 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"} Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 08:56:41.923896 4982 scope.go:117] "RemoveContainer" containerID="ac3b5df8ab218d181814be454c30580f21ff032d3db629a1deeba03790412065" Jan 23 08:56:41 crc kubenswrapper[4982]: I0123 
08:56:41.924844 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:56:41 crc kubenswrapper[4982]: E0123 08:56:41.925106 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:56:53 crc kubenswrapper[4982]: I0123 08:56:53.827197 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:56:53 crc kubenswrapper[4982]: E0123 08:56:53.827918 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:57:06 crc kubenswrapper[4982]: I0123 08:57:06.827239 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:57:06 crc kubenswrapper[4982]: E0123 08:57:06.829042 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:57:18 crc kubenswrapper[4982]: I0123 08:57:18.827548 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:57:18 crc kubenswrapper[4982]: E0123 08:57:18.828417 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:57:32 crc kubenswrapper[4982]: I0123 08:57:32.827457 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:57:32 crc kubenswrapper[4982]: E0123 08:57:32.829482 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:57:47 crc kubenswrapper[4982]: I0123 08:57:47.827907 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:57:47 crc kubenswrapper[4982]: E0123 08:57:47.829018 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:58:02 crc kubenswrapper[4982]: I0123 08:58:02.827752 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:58:02 crc kubenswrapper[4982]: E0123 08:58:02.828423 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:58:16 crc kubenswrapper[4982]: I0123 08:58:16.827748 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:58:16 crc kubenswrapper[4982]: E0123 08:58:16.828591 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:58:30 crc kubenswrapper[4982]: I0123 08:58:30.828012 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:58:30 crc kubenswrapper[4982]: E0123 08:58:30.829299 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:58:43 crc kubenswrapper[4982]: I0123 08:58:43.828200 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:58:43 crc kubenswrapper[4982]: E0123 08:58:43.828953 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:58:56 crc kubenswrapper[4982]: I0123 08:58:56.827733 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:58:56 crc kubenswrapper[4982]: E0123 08:58:56.828526 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:59:07 crc kubenswrapper[4982]: I0123 08:59:07.828118 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:59:07 crc kubenswrapper[4982]: E0123 08:59:07.829431 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:59:19 crc kubenswrapper[4982]: I0123 08:59:19.828440 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:59:19 crc kubenswrapper[4982]: E0123 08:59:19.829156 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:59:31 crc kubenswrapper[4982]: I0123 08:59:31.827682 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:59:31 crc kubenswrapper[4982]: E0123 08:59:31.828233 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:59:45 crc kubenswrapper[4982]: I0123 08:59:45.834523 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:59:45 crc kubenswrapper[4982]: E0123 08:59:45.835253 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 08:59:59 crc kubenswrapper[4982]: I0123 08:59:59.827652 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 08:59:59 crc kubenswrapper[4982]: E0123 08:59:59.828387 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.177456 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"]
Jan 23 09:00:00 crc kubenswrapper[4982]: E0123 09:00:00.179202 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="extract-utilities"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179238 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="extract-utilities"
Jan 23 09:00:00 crc kubenswrapper[4982]: E0123 09:00:00.179259 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="extract-content"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179266 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="extract-content"
Jan 23 09:00:00 crc kubenswrapper[4982]: E0123 09:00:00.179278 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="registry-server"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179284 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="registry-server"
Jan 23 09:00:00 crc kubenswrapper[4982]: E0123 09:00:00.179297 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="extract-utilities"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179303 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="extract-utilities"
Jan 23 09:00:00 crc kubenswrapper[4982]: E0123 09:00:00.179311 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="extract-content"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179316 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="extract-content"
Jan 23 09:00:00 crc kubenswrapper[4982]: E0123 09:00:00.179339 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="registry-server"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179344 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="registry-server"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179519 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb27b82f-c128-4887-b6bc-3c781b7a7b08" containerName="registry-server"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.179540 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88f6228-7402-4ec3-80fa-2da80eb12767" containerName="registry-server"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.180136 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.184557 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.184883 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.192313 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"]
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.275327 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-secret-volume\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.275565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-config-volume\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.275753 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttg8d\" (UniqueName: \"kubernetes.io/projected/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-kube-api-access-ttg8d\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.376841 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-config-volume\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.376925 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttg8d\" (UniqueName: \"kubernetes.io/projected/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-kube-api-access-ttg8d\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.376967 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-secret-volume\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.377960 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-config-volume\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.382913 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-secret-volume\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.397800 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttg8d\" (UniqueName: \"kubernetes.io/projected/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-kube-api-access-ttg8d\") pod \"collect-profiles-29485980-rft2k\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.502595 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:00 crc kubenswrapper[4982]: I0123 09:00:00.916702 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"]
Jan 23 09:00:01 crc kubenswrapper[4982]: I0123 09:00:01.466836 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" containerID="fd0fb3cd364215840f646ccf1432034acc8ff7c1f1aa25346ba8ba452889cfc1" exitCode=0
Jan 23 09:00:01 crc kubenswrapper[4982]: I0123 09:00:01.466904 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k" event={"ID":"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4","Type":"ContainerDied","Data":"fd0fb3cd364215840f646ccf1432034acc8ff7c1f1aa25346ba8ba452889cfc1"}
Jan 23 09:00:01 crc kubenswrapper[4982]: I0123 09:00:01.467217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k" event={"ID":"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4","Type":"ContainerStarted","Data":"273b0ca97a27ebf71555558e66361e1d3f6a53040b50dd40cc4718377ddee112"}
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.773570 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.820512 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttg8d\" (UniqueName: \"kubernetes.io/projected/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-kube-api-access-ttg8d\") pod \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") "
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.820741 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-secret-volume\") pod \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") "
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.820798 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-config-volume\") pod \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\" (UID: \"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4\") "
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.821354 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" (UID: "6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.828876 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-kube-api-access-ttg8d" (OuterVolumeSpecName: "kube-api-access-ttg8d") pod "6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" (UID: "6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4"). InnerVolumeSpecName "kube-api-access-ttg8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.828966 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" (UID: "6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.922960 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-config-volume\") on node \"crc\" DevicePath \"\""
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.923011 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttg8d\" (UniqueName: \"kubernetes.io/projected/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-kube-api-access-ttg8d\") on node \"crc\" DevicePath \"\""
Jan 23 09:00:02 crc kubenswrapper[4982]: I0123 09:00:02.923026 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 23 09:00:03 crc kubenswrapper[4982]: I0123 09:00:03.482615 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k" event={"ID":"6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4","Type":"ContainerDied","Data":"273b0ca97a27ebf71555558e66361e1d3f6a53040b50dd40cc4718377ddee112"}
Jan 23 09:00:03 crc kubenswrapper[4982]: I0123 09:00:03.482935 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="273b0ca97a27ebf71555558e66361e1d3f6a53040b50dd40cc4718377ddee112"
Jan 23 09:00:03 crc kubenswrapper[4982]: I0123 09:00:03.482689 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"
Jan 23 09:00:03 crc kubenswrapper[4982]: I0123 09:00:03.851747 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4"]
Jan 23 09:00:03 crc kubenswrapper[4982]: I0123 09:00:03.859937 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485935-9t9m4"]
Jan 23 09:00:05 crc kubenswrapper[4982]: I0123 09:00:05.836066 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80da01e2-2367-4834-9237-e25fe95a3728" path="/var/lib/kubelet/pods/80da01e2-2367-4834-9237-e25fe95a3728/volumes"
Jan 23 09:00:13 crc kubenswrapper[4982]: I0123 09:00:13.827638 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:00:13 crc kubenswrapper[4982]: E0123 09:00:13.828494 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:00:27 crc kubenswrapper[4982]: I0123 09:00:27.827998 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:00:27 crc kubenswrapper[4982]: E0123 09:00:27.828730 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:00:39 crc kubenswrapper[4982]: I0123 09:00:39.827307 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:00:39 crc kubenswrapper[4982]: E0123 09:00:39.828039 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:00:42 crc kubenswrapper[4982]: I0123 09:00:42.104084 4982 scope.go:117] "RemoveContainer" containerID="ba70dd6792cc52f19b94f37785e5764d467316fb10b7d8334d3fa8a2c1a924d9"
Jan 23 09:00:52 crc kubenswrapper[4982]: I0123 09:00:52.828014 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:00:52 crc kubenswrapper[4982]: E0123 09:00:52.828723 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:01:05 crc kubenswrapper[4982]: I0123 09:01:05.831968 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:01:05 crc kubenswrapper[4982]: E0123 09:01:05.832727 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:01:19 crc kubenswrapper[4982]: I0123 09:01:19.828201 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:01:19 crc kubenswrapper[4982]: E0123 09:01:19.829124 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:01:32 crc kubenswrapper[4982]: I0123 09:01:32.827470 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:01:32 crc kubenswrapper[4982]: E0123 09:01:32.828259 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:01:46 crc kubenswrapper[4982]: I0123 09:01:46.827713 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6"
Jan 23 09:01:47 crc kubenswrapper[4982]: I0123 09:01:47.428670 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"cf61c9ee9f022d2967874ce287097c590913d9b8658b34b3e6b18761aa7ac5b4"}
Jan 23 09:02:38 crc kubenswrapper[4982]: I0123 09:02:38.963968 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4tsrn"]
Jan 23 09:02:38 crc kubenswrapper[4982]: E0123 09:02:38.965121 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" containerName="collect-profiles"
Jan 23 09:02:38 crc kubenswrapper[4982]: I0123 09:02:38.965141 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" containerName="collect-profiles"
Jan 23 09:02:38 crc kubenswrapper[4982]: I0123 09:02:38.965407 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" containerName="collect-profiles"
Jan 23 09:02:38 crc kubenswrapper[4982]: I0123 09:02:38.967361 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:38 crc kubenswrapper[4982]: I0123 09:02:38.980370 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tsrn"]
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.161802 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6frb9\" (UniqueName: \"kubernetes.io/projected/a56b0a31-716a-4c8e-8814-d90ba4ac6001-kube-api-access-6frb9\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.161849 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-catalog-content\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.161961 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-utilities\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.263296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-utilities\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.263431 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6frb9\" (UniqueName: \"kubernetes.io/projected/a56b0a31-716a-4c8e-8814-d90ba4ac6001-kube-api-access-6frb9\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.263456 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-catalog-content\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.264138 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-catalog-content\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.264139 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-utilities\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.288271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6frb9\" (UniqueName: \"kubernetes.io/projected/a56b0a31-716a-4c8e-8814-d90ba4ac6001-kube-api-access-6frb9\") pod \"certified-operators-4tsrn\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") " pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.290458 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:39 crc kubenswrapper[4982]: I0123 09:02:39.824463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4tsrn"]
Jan 23 09:02:40 crc kubenswrapper[4982]: E0123 09:02:40.701472 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda56b0a31_716a_4c8e_8814_d90ba4ac6001.slice/crio-conmon-1f37eccd3ea628cfaa917bb5333d6a484f0d3f34a82a2418350f81e6f9645569.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda56b0a31_716a_4c8e_8814_d90ba4ac6001.slice/crio-1f37eccd3ea628cfaa917bb5333d6a484f0d3f34a82a2418350f81e6f9645569.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 09:02:40 crc kubenswrapper[4982]: I0123 09:02:40.860916 4982 generic.go:334] "Generic (PLEG): container finished" podID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerID="1f37eccd3ea628cfaa917bb5333d6a484f0d3f34a82a2418350f81e6f9645569" exitCode=0
Jan 23 09:02:40 crc kubenswrapper[4982]: I0123 09:02:40.860966 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerDied","Data":"1f37eccd3ea628cfaa917bb5333d6a484f0d3f34a82a2418350f81e6f9645569"}
Jan 23 09:02:40 crc kubenswrapper[4982]: I0123 09:02:40.860991 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerStarted","Data":"30e72e332f147d1ad7b6f296b75eae693cfa699bf204b431c06783c4f790c245"}
Jan 23 09:02:40 crc kubenswrapper[4982]: I0123 09:02:40.863336 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 09:02:41 crc kubenswrapper[4982]: I0123 09:02:41.874331 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerStarted","Data":"6ae32c5f0544e5ca9d3ef69658c5a1eaef76e69f983169e346aeab91dd952ce0"}
Jan 23 09:02:42 crc kubenswrapper[4982]: I0123 09:02:42.882387 4982 generic.go:334] "Generic (PLEG): container finished" podID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerID="6ae32c5f0544e5ca9d3ef69658c5a1eaef76e69f983169e346aeab91dd952ce0" exitCode=0
Jan 23 09:02:42 crc kubenswrapper[4982]: I0123 09:02:42.882455 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerDied","Data":"6ae32c5f0544e5ca9d3ef69658c5a1eaef76e69f983169e346aeab91dd952ce0"}
Jan 23 09:02:43 crc kubenswrapper[4982]: I0123 09:02:43.893450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerStarted","Data":"eb745526e1bf464aa4df76e1a2453dc5b33387c2c5fbcd9e4469bc460eec5541"}
Jan 23 09:02:43 crc kubenswrapper[4982]: I0123 09:02:43.914712 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4tsrn" podStartSLOduration=3.440346386 podStartE2EDuration="5.914691373s" podCreationTimestamp="2026-01-23 09:02:38 +0000 UTC" firstStartedPulling="2026-01-23 09:02:40.863057852 +0000 UTC m=+4035.343421238" lastFinishedPulling="2026-01-23 09:02:43.337402829 +0000 UTC m=+4037.817766225" observedRunningTime="2026-01-23 09:02:43.912136194 +0000 UTC m=+4038.392499590" watchObservedRunningTime="2026-01-23 09:02:43.914691373 +0000 UTC m=+4038.395054759"
Jan 23 09:02:49 crc kubenswrapper[4982]: I0123 09:02:49.291533 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:49 crc kubenswrapper[4982]: I0123 09:02:49.292130 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:49 crc kubenswrapper[4982]: I0123 09:02:49.333549 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:49 crc kubenswrapper[4982]: I0123 09:02:49.975953 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:50 crc kubenswrapper[4982]: I0123 09:02:50.021658 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tsrn"]
Jan 23 09:02:51 crc kubenswrapper[4982]: I0123 09:02:51.958538 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4tsrn" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="registry-server" containerID="cri-o://eb745526e1bf464aa4df76e1a2453dc5b33387c2c5fbcd9e4469bc460eec5541" gracePeriod=2
Jan 23 09:02:52 crc kubenswrapper[4982]: I0123 09:02:52.967932 4982 generic.go:334] "Generic (PLEG): container finished" podID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerID="eb745526e1bf464aa4df76e1a2453dc5b33387c2c5fbcd9e4469bc460eec5541" exitCode=0
Jan 23 09:02:52 crc kubenswrapper[4982]: I0123 09:02:52.968010 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerDied","Data":"eb745526e1bf464aa4df76e1a2453dc5b33387c2c5fbcd9e4469bc460eec5541"}
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.451972 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.500330 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6frb9\" (UniqueName: \"kubernetes.io/projected/a56b0a31-716a-4c8e-8814-d90ba4ac6001-kube-api-access-6frb9\") pod \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") "
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.500413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-catalog-content\") pod \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") "
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.500441 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-utilities\") pod \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\" (UID: \"a56b0a31-716a-4c8e-8814-d90ba4ac6001\") "
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.501558 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-utilities" (OuterVolumeSpecName: "utilities") pod "a56b0a31-716a-4c8e-8814-d90ba4ac6001" (UID: "a56b0a31-716a-4c8e-8814-d90ba4ac6001"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.506261 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56b0a31-716a-4c8e-8814-d90ba4ac6001-kube-api-access-6frb9" (OuterVolumeSpecName: "kube-api-access-6frb9") pod "a56b0a31-716a-4c8e-8814-d90ba4ac6001" (UID: "a56b0a31-716a-4c8e-8814-d90ba4ac6001"). InnerVolumeSpecName "kube-api-access-6frb9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.550926 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a56b0a31-716a-4c8e-8814-d90ba4ac6001" (UID: "a56b0a31-716a-4c8e-8814-d90ba4ac6001"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.601643 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.601938 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a56b0a31-716a-4c8e-8814-d90ba4ac6001-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.602010 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6frb9\" (UniqueName: \"kubernetes.io/projected/a56b0a31-716a-4c8e-8814-d90ba4ac6001-kube-api-access-6frb9\") on node \"crc\" DevicePath \"\""
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.980525 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4tsrn" event={"ID":"a56b0a31-716a-4c8e-8814-d90ba4ac6001","Type":"ContainerDied","Data":"30e72e332f147d1ad7b6f296b75eae693cfa699bf204b431c06783c4f790c245"}
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.980582 4982 scope.go:117] "RemoveContainer" containerID="eb745526e1bf464aa4df76e1a2453dc5b33387c2c5fbcd9e4469bc460eec5541"
Jan 23 09:02:53 crc kubenswrapper[4982]: I0123 09:02:53.980586 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4tsrn"
Jan 23 09:02:54 crc kubenswrapper[4982]: I0123 09:02:54.002109 4982 scope.go:117] "RemoveContainer" containerID="6ae32c5f0544e5ca9d3ef69658c5a1eaef76e69f983169e346aeab91dd952ce0"
Jan 23 09:02:54 crc kubenswrapper[4982]: I0123 09:02:54.007048 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4tsrn"]
Jan 23 09:02:54 crc kubenswrapper[4982]: I0123 09:02:54.015948 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4tsrn"]
Jan 23 09:02:54 crc kubenswrapper[4982]: I0123 09:02:54.021503 4982 scope.go:117] "RemoveContainer" containerID="1f37eccd3ea628cfaa917bb5333d6a484f0d3f34a82a2418350f81e6f9645569"
Jan 23 09:02:55 crc kubenswrapper[4982]: I0123 09:02:55.839875 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" path="/var/lib/kubelet/pods/a56b0a31-716a-4c8e-8814-d90ba4ac6001/volumes"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.568125 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2cfzh"]
Jan 23 09:03:17 crc kubenswrapper[4982]: E0123 09:03:17.569867 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="extract-utilities"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.569886 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="extract-utilities"
Jan 23 09:03:17 crc kubenswrapper[4982]: E0123 09:03:17.569899 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="extract-content"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.569905 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="extract-content"
Jan 23 09:03:17 crc kubenswrapper[4982]: E0123 09:03:17.569933 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="registry-server"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.569940 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="registry-server"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.570184 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56b0a31-716a-4c8e-8814-d90ba4ac6001" containerName="registry-server"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.572616 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.586207 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cfzh"]
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.623274 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-utilities\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.623404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjx2\" (UniqueName: \"kubernetes.io/projected/30fc64d3-a890-40b0-99b7-8428766058f2-kube-api-access-vxjx2\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.623524 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-catalog-content\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.724913 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjx2\" (UniqueName: \"kubernetes.io/projected/30fc64d3-a890-40b0-99b7-8428766058f2-kube-api-access-vxjx2\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.725006 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-catalog-content\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.725081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-utilities\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.725615 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-utilities\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.725704 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-catalog-content\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.756935 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjx2\" (UniqueName: \"kubernetes.io/projected/30fc64d3-a890-40b0-99b7-8428766058f2-kube-api-access-vxjx2\") pod \"redhat-marketplace-2cfzh\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") " pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:17 crc kubenswrapper[4982]: I0123 09:03:17.899322 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:18 crc kubenswrapper[4982]: I0123 09:03:18.373617 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cfzh"]
Jan 23 09:03:19 crc kubenswrapper[4982]: I0123 09:03:19.189654 4982 generic.go:334] "Generic (PLEG): container finished" podID="30fc64d3-a890-40b0-99b7-8428766058f2" containerID="732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b" exitCode=0
Jan 23 09:03:19 crc kubenswrapper[4982]: I0123 09:03:19.189718 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerDied","Data":"732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b"}
Jan 23 09:03:19 crc kubenswrapper[4982]: I0123 09:03:19.190008 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerStarted","Data":"eaee5e5f929b6ca82d6ea87e665badb70f7887cb82781cef3197d00448ab1747"}
Jan 23 09:03:20 crc kubenswrapper[4982]: I0123 09:03:20.201271 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerStarted","Data":"aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f"}
Jan 23 09:03:21 crc kubenswrapper[4982]: I0123 09:03:21.214373 4982 generic.go:334] "Generic (PLEG): container finished" podID="30fc64d3-a890-40b0-99b7-8428766058f2" containerID="aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f" exitCode=0
Jan 23 09:03:21 crc kubenswrapper[4982]: I0123 09:03:21.214418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerDied","Data":"aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f"}
Jan 23 09:03:22 crc kubenswrapper[4982]: I0123 09:03:22.226829 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerStarted","Data":"ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e"}
Jan 23 09:03:22 crc kubenswrapper[4982]: I0123 09:03:22.242054 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2cfzh" podStartSLOduration=2.839241338 podStartE2EDuration="5.242036918s" podCreationTimestamp="2026-01-23 09:03:17 +0000 UTC" firstStartedPulling="2026-01-23 09:03:19.190988533 +0000 UTC m=+4073.671351919" lastFinishedPulling="2026-01-23 09:03:21.593784113 +0000 UTC m=+4076.074147499" observedRunningTime="2026-01-23 09:03:22.240822886 +0000 UTC m=+4076.721186282" watchObservedRunningTime="2026-01-23 09:03:22.242036918 +0000 UTC m=+4076.722400304"
Jan 23 09:03:27 crc kubenswrapper[4982]: I0123 09:03:27.899524 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:27 crc kubenswrapper[4982]: I0123 09:03:27.900166 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:27 crc kubenswrapper[4982]: I0123 09:03:27.948817 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:28 crc kubenswrapper[4982]: I0123 09:03:28.328807 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:28 crc kubenswrapper[4982]: I0123 09:03:28.379956 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cfzh"]
Jan 23 09:03:30 crc kubenswrapper[4982]: I0123 09:03:30.295842 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2cfzh" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="registry-server" containerID="cri-o://ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e" gracePeriod=2
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.183116 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.266614 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-utilities\") pod \"30fc64d3-a890-40b0-99b7-8428766058f2\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") "
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.266774 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxjx2\" (UniqueName: \"kubernetes.io/projected/30fc64d3-a890-40b0-99b7-8428766058f2-kube-api-access-vxjx2\") pod \"30fc64d3-a890-40b0-99b7-8428766058f2\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") "
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.266870 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-catalog-content\") pod \"30fc64d3-a890-40b0-99b7-8428766058f2\" (UID: \"30fc64d3-a890-40b0-99b7-8428766058f2\") "
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.267991 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-utilities" (OuterVolumeSpecName: "utilities") pod "30fc64d3-a890-40b0-99b7-8428766058f2" (UID: "30fc64d3-a890-40b0-99b7-8428766058f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.272442 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30fc64d3-a890-40b0-99b7-8428766058f2-kube-api-access-vxjx2" (OuterVolumeSpecName: "kube-api-access-vxjx2") pod "30fc64d3-a890-40b0-99b7-8428766058f2" (UID: "30fc64d3-a890-40b0-99b7-8428766058f2"). InnerVolumeSpecName "kube-api-access-vxjx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.290857 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30fc64d3-a890-40b0-99b7-8428766058f2" (UID: "30fc64d3-a890-40b0-99b7-8428766058f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.304947 4982 generic.go:334] "Generic (PLEG): container finished" podID="30fc64d3-a890-40b0-99b7-8428766058f2" containerID="ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e" exitCode=0
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.304989 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cfzh"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.304991 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerDied","Data":"ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e"}
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.305088 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cfzh" event={"ID":"30fc64d3-a890-40b0-99b7-8428766058f2","Type":"ContainerDied","Data":"eaee5e5f929b6ca82d6ea87e665badb70f7887cb82781cef3197d00448ab1747"}
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.305104 4982 scope.go:117] "RemoveContainer" containerID="ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.324840 4982 scope.go:117] "RemoveContainer" containerID="aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.337877 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cfzh"]
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.344814 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cfzh"]
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.358351 4982 scope.go:117] "RemoveContainer" containerID="732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.368710 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.368738 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxjx2\" (UniqueName: \"kubernetes.io/projected/30fc64d3-a890-40b0-99b7-8428766058f2-kube-api-access-vxjx2\") on node \"crc\" DevicePath \"\""
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.368747 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30fc64d3-a890-40b0-99b7-8428766058f2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.373462 4982 scope.go:117] "RemoveContainer" containerID="ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e"
Jan 23 09:03:31 crc kubenswrapper[4982]: E0123 09:03:31.373969 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e\": container with ID starting with ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e not found: ID does not exist" containerID="ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.374011 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e"} err="failed to get container status \"ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e\": rpc error: code = NotFound desc = could not find container \"ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e\": container with ID starting with ab97a89fcb027b9d82318d8842e293bab269139c0b85dd146d5c44ac9211569e not found: ID does not exist"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.374039 4982 scope.go:117] "RemoveContainer" containerID="aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f"
Jan 23 09:03:31 crc kubenswrapper[4982]: E0123 09:03:31.374343 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f\": container with ID starting with aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f not found: ID does not exist" containerID="aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.374372 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f"} err="failed to get container status \"aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f\": rpc error: code = NotFound desc = could not find container \"aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f\": container with ID starting with aea751a0e6ac4289b5f69042e9c9898f3c4f6b89cc92c81b1548f6be7f0d466f not found: ID does not exist"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.374393 4982 scope.go:117] "RemoveContainer" containerID="732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b"
Jan 23 09:03:31 crc kubenswrapper[4982]: E0123 09:03:31.374723 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b\": container with ID starting with 732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b not found: ID does not exist" containerID="732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.374741 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b"} err="failed to get container status \"732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b\": rpc error: code = NotFound desc = could not find container \"732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b\": container with ID starting with 732426b50a39b2053a50ac0f3d9667c40e615d0c8211b1f6c28a1fd582228d7b not found: ID does not exist"
Jan 23 09:03:31 crc kubenswrapper[4982]: I0123 09:03:31.838447 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" path="/var/lib/kubelet/pods/30fc64d3-a890-40b0-99b7-8428766058f2/volumes"
Jan 23 09:04:11 crc kubenswrapper[4982]: I0123 09:04:11.435657 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 09:04:11 crc kubenswrapper[4982]: I0123 09:04:11.436449 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.175992 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlqr5"]
Jan 23 09:04:17 crc kubenswrapper[4982]: E0123 09:04:17.177016 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="extract-utilities"
Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.177035 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="extract-utilities"
Jan 23 09:04:17 crc kubenswrapper[4982]: E0123 09:04:17.177055 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="registry-server"
Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.177062 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="registry-server"
Jan 23 09:04:17 crc kubenswrapper[4982]: E0123 09:04:17.177076 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="extract-content"
Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.177083 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="extract-content"
Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.177328 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fc64d3-a890-40b0-99b7-8428766058f2" containerName="registry-server"
Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.178780 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.188979 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlqr5"] Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.205284 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnhfn\" (UniqueName: \"kubernetes.io/projected/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-kube-api-access-fnhfn\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.205369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-utilities\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.205390 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-catalog-content\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.306596 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnhfn\" (UniqueName: \"kubernetes.io/projected/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-kube-api-access-fnhfn\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.306692 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-utilities\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.306713 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-catalog-content\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.307262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-utilities\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.307298 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-catalog-content\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.325023 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fnhfn\" (UniqueName: \"kubernetes.io/projected/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-kube-api-access-fnhfn\") pod \"community-operators-wlqr5\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:17 crc kubenswrapper[4982]: I0123 09:04:17.564012 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:18 crc kubenswrapper[4982]: I0123 09:04:18.123806 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlqr5"] Jan 23 09:04:18 crc kubenswrapper[4982]: I0123 09:04:18.700518 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerID="b0ac42d9c9d857d8c0c08ae54db90ac1ac2e1ee9a432140bc49428507f5819e1" exitCode=0 Jan 23 09:04:18 crc kubenswrapper[4982]: I0123 09:04:18.700596 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerDied","Data":"b0ac42d9c9d857d8c0c08ae54db90ac1ac2e1ee9a432140bc49428507f5819e1"} Jan 23 09:04:18 crc kubenswrapper[4982]: I0123 09:04:18.700885 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerStarted","Data":"57f90b1bcfaa1fdd8e027ca4440437415c90a2fa9e4936e5042ac6bfe4e22545"} Jan 23 09:04:19 crc kubenswrapper[4982]: I0123 09:04:19.709192 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerStarted","Data":"bdf574047cdb5f00686e3d2387b6abdd626aa35defca60ca78293f19a520c6a1"} Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.197552 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7r5bt"] Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.204546 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.239387 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7r5bt"] Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.254551 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brmx\" (UniqueName: \"kubernetes.io/projected/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-kube-api-access-4brmx\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.254673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-catalog-content\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.254750 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-utilities\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.355752 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-catalog-content\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.355858 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-utilities\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.355957 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brmx\" (UniqueName: \"kubernetes.io/projected/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-kube-api-access-4brmx\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.356348 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-catalog-content\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.356643 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-utilities\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.377924 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4brmx\" (UniqueName: \"kubernetes.io/projected/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-kube-api-access-4brmx\") pod \"redhat-operators-7r5bt\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.539724 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.725230 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerID="bdf574047cdb5f00686e3d2387b6abdd626aa35defca60ca78293f19a520c6a1" exitCode=0 Jan 23 09:04:20 crc kubenswrapper[4982]: I0123 09:04:20.725446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerDied","Data":"bdf574047cdb5f00686e3d2387b6abdd626aa35defca60ca78293f19a520c6a1"} Jan 23 09:04:21 crc kubenswrapper[4982]: I0123 09:04:21.054109 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7r5bt"] Jan 23 09:04:21 crc kubenswrapper[4982]: I0123 09:04:21.736361 4982 generic.go:334] "Generic (PLEG): container finished" podID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerID="826825ec089ddc991ba5674b8ec1ca0001cfe9f90277efb80e8d4b93fd20547e" exitCode=0 Jan 23 09:04:21 crc kubenswrapper[4982]: I0123 09:04:21.736489 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r5bt" event={"ID":"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc","Type":"ContainerDied","Data":"826825ec089ddc991ba5674b8ec1ca0001cfe9f90277efb80e8d4b93fd20547e"} Jan 23 09:04:21 crc kubenswrapper[4982]: I0123 09:04:21.737019 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r5bt" event={"ID":"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc","Type":"ContainerStarted","Data":"258ad41527c8090d75560567fc9180cf7eb073bbb93765370095348cb2a8bd04"} Jan 23 09:04:21 crc kubenswrapper[4982]: I0123 09:04:21.744967 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerStarted","Data":"a7ea0e0b3f23955a1957b22e9d8ca6ba1c23d6d543d06b551e81105953bdebc0"} Jan 23 09:04:21 crc kubenswrapper[4982]: I0123 09:04:21.789139 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wlqr5" podStartSLOduration=2.29260262 podStartE2EDuration="4.789114853s" podCreationTimestamp="2026-01-23 09:04:17 +0000 UTC" firstStartedPulling="2026-01-23 09:04:18.701945766 +0000 UTC m=+4133.182309152" lastFinishedPulling="2026-01-23 09:04:21.198457999 +0000 UTC m=+4135.678821385" observedRunningTime="2026-01-23 09:04:21.787361776 +0000 UTC m=+4136.267725172" watchObservedRunningTime="2026-01-23 09:04:21.789114853 +0000 UTC m=+4136.269478239" Jan 23 09:04:23 crc kubenswrapper[4982]: I0123 09:04:23.772010 4982 generic.go:334] "Generic (PLEG): container finished" podID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerID="784b27932cdf7a7201438dbd7128f39279a2b37d8cd0cb849e4221074eeee11e" exitCode=0 Jan 23 09:04:23 crc kubenswrapper[4982]: I0123 09:04:23.772086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r5bt" 
event={"ID":"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc","Type":"ContainerDied","Data":"784b27932cdf7a7201438dbd7128f39279a2b37d8cd0cb849e4221074eeee11e"} Jan 23 09:04:24 crc kubenswrapper[4982]: I0123 09:04:24.782450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r5bt" event={"ID":"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc","Type":"ContainerStarted","Data":"3ba5114852f688db018d3444ae3dce47707446013d3de49c8f4a3d962b9fa1c6"} Jan 23 09:04:24 crc kubenswrapper[4982]: I0123 09:04:24.798862 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7r5bt" podStartSLOduration=2.314307785 podStartE2EDuration="4.798846106s" podCreationTimestamp="2026-01-23 09:04:20 +0000 UTC" firstStartedPulling="2026-01-23 09:04:21.738891841 +0000 UTC m=+4136.219255237" lastFinishedPulling="2026-01-23 09:04:24.223430172 +0000 UTC m=+4138.703793558" observedRunningTime="2026-01-23 09:04:24.797668624 +0000 UTC m=+4139.278032040" watchObservedRunningTime="2026-01-23 09:04:24.798846106 +0000 UTC m=+4139.279209492" Jan 23 09:04:27 crc kubenswrapper[4982]: I0123 09:04:27.564583 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:27 crc kubenswrapper[4982]: I0123 09:04:27.564912 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:27 crc kubenswrapper[4982]: I0123 09:04:27.690348 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:27 crc kubenswrapper[4982]: I0123 09:04:27.849849 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:29 crc kubenswrapper[4982]: I0123 09:04:29.766890 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlqr5"] Jan 23 09:04:29 crc kubenswrapper[4982]: I0123 09:04:29.815937 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wlqr5" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="registry-server" containerID="cri-o://a7ea0e0b3f23955a1957b22e9d8ca6ba1c23d6d543d06b551e81105953bdebc0" gracePeriod=2 Jan 23 09:04:30 crc kubenswrapper[4982]: I0123 09:04:30.540595 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:30 crc kubenswrapper[4982]: I0123 09:04:30.540830 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:30 crc kubenswrapper[4982]: I0123 09:04:30.580143 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:30 crc kubenswrapper[4982]: I0123 09:04:30.863267 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:31 crc kubenswrapper[4982]: I0123 09:04:31.838018 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerID="a7ea0e0b3f23955a1957b22e9d8ca6ba1c23d6d543d06b551e81105953bdebc0" exitCode=0 Jan 23 09:04:31 crc kubenswrapper[4982]: I0123 09:04:31.850643 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerDied","Data":"a7ea0e0b3f23955a1957b22e9d8ca6ba1c23d6d543d06b551e81105953bdebc0"} Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.092808 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.181910 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7r5bt"] Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.185369 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnhfn\" (UniqueName: \"kubernetes.io/projected/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-kube-api-access-fnhfn\") pod \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.185501 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-catalog-content\") pod \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.185532 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-utilities\") pod \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\" (UID: \"6b712112-fe1d-4b5a-8f3d-9a917e842cc8\") " Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.186871 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-utilities" (OuterVolumeSpecName: "utilities") pod "6b712112-fe1d-4b5a-8f3d-9a917e842cc8" (UID: "6b712112-fe1d-4b5a-8f3d-9a917e842cc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.191237 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.236848 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-kube-api-access-fnhfn" (OuterVolumeSpecName: "kube-api-access-fnhfn") pod "6b712112-fe1d-4b5a-8f3d-9a917e842cc8" (UID: "6b712112-fe1d-4b5a-8f3d-9a917e842cc8"). InnerVolumeSpecName "kube-api-access-fnhfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.260737 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b712112-fe1d-4b5a-8f3d-9a917e842cc8" (UID: "6b712112-fe1d-4b5a-8f3d-9a917e842cc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.292581 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.292638 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnhfn\" (UniqueName: \"kubernetes.io/projected/6b712112-fe1d-4b5a-8f3d-9a917e842cc8-kube-api-access-fnhfn\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.852752 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlqr5" event={"ID":"6b712112-fe1d-4b5a-8f3d-9a917e842cc8","Type":"ContainerDied","Data":"57f90b1bcfaa1fdd8e027ca4440437415c90a2fa9e4936e5042ac6bfe4e22545"} Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.852795 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlqr5" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.852815 4982 scope.go:117] "RemoveContainer" containerID="a7ea0e0b3f23955a1957b22e9d8ca6ba1c23d6d543d06b551e81105953bdebc0" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.890929 4982 scope.go:117] "RemoveContainer" containerID="bdf574047cdb5f00686e3d2387b6abdd626aa35defca60ca78293f19a520c6a1" Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.892889 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlqr5"] Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.899745 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wlqr5"] Jan 23 09:04:32 crc kubenswrapper[4982]: I0123 09:04:32.916487 4982 scope.go:117] "RemoveContainer" containerID="b0ac42d9c9d857d8c0c08ae54db90ac1ac2e1ee9a432140bc49428507f5819e1" Jan 23 09:04:32 crc kubenswrapper[4982]: E0123 09:04:32.961458 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b712112_fe1d_4b5a_8f3d_9a917e842cc8.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b712112_fe1d_4b5a_8f3d_9a917e842cc8.slice/crio-57f90b1bcfaa1fdd8e027ca4440437415c90a2fa9e4936e5042ac6bfe4e22545\": RecentStats: unable to find data in memory cache]" Jan 23 09:04:33 crc kubenswrapper[4982]: I0123 09:04:33.837809 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" path="/var/lib/kubelet/pods/6b712112-fe1d-4b5a-8f3d-9a917e842cc8/volumes" Jan 23 09:04:33 crc kubenswrapper[4982]: I0123 09:04:33.861656 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7r5bt" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="registry-server" containerID="cri-o://3ba5114852f688db018d3444ae3dce47707446013d3de49c8f4a3d962b9fa1c6" gracePeriod=2 Jan 23 09:04:35 crc kubenswrapper[4982]: I0123 09:04:35.878738 4982 generic.go:334] "Generic (PLEG): container finished" podID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerID="3ba5114852f688db018d3444ae3dce47707446013d3de49c8f4a3d962b9fa1c6" exitCode=0 Jan 23 09:04:35 crc kubenswrapper[4982]: I0123 09:04:35.879105 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r5bt" event={"ID":"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc","Type":"ContainerDied","Data":"3ba5114852f688db018d3444ae3dce47707446013d3de49c8f4a3d962b9fa1c6"} Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.089310 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.150225 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-utilities\") pod \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.150303 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brmx\" (UniqueName: \"kubernetes.io/projected/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-kube-api-access-4brmx\") pod \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.150420 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-catalog-content\") pod \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\" (UID: \"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc\") " Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.151004 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-utilities" (OuterVolumeSpecName: "utilities") pod "f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" (UID: "f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.160984 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-kube-api-access-4brmx" (OuterVolumeSpecName: "kube-api-access-4brmx") pod "f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" (UID: "f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc"). InnerVolumeSpecName "kube-api-access-4brmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.252728 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.252784 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4brmx\" (UniqueName: \"kubernetes.io/projected/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-kube-api-access-4brmx\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.273613 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" (UID: "f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.353927 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.888570 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7r5bt" event={"ID":"f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc","Type":"ContainerDied","Data":"258ad41527c8090d75560567fc9180cf7eb073bbb93765370095348cb2a8bd04"} Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.888692 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7r5bt" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.888974 4982 scope.go:117] "RemoveContainer" containerID="3ba5114852f688db018d3444ae3dce47707446013d3de49c8f4a3d962b9fa1c6" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.909557 4982 scope.go:117] "RemoveContainer" containerID="784b27932cdf7a7201438dbd7128f39279a2b37d8cd0cb849e4221074eeee11e" Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.921531 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7r5bt"] Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.941660 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7r5bt"] Jan 23 09:04:36 crc kubenswrapper[4982]: I0123 09:04:36.962509 4982 scope.go:117] "RemoveContainer" containerID="826825ec089ddc991ba5674b8ec1ca0001cfe9f90277efb80e8d4b93fd20547e" Jan 23 09:04:37 crc kubenswrapper[4982]: I0123 09:04:37.836108 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" path="/var/lib/kubelet/pods/f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc/volumes" Jan 23 09:04:41 crc kubenswrapper[4982]: I0123 09:04:41.435750 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:04:41 crc kubenswrapper[4982]: I0123 09:04:41.436285 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:05:11 crc kubenswrapper[4982]: I0123 09:05:11.435748 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:05:11 crc kubenswrapper[4982]: I0123 09:05:11.436266 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:05:11 crc kubenswrapper[4982]: I0123 09:05:11.436304 4982 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:05:11 crc kubenswrapper[4982]: I0123 09:05:11.437742 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf61c9ee9f022d2967874ce287097c590913d9b8658b34b3e6b18761aa7ac5b4"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:05:11 crc kubenswrapper[4982]: I0123 09:05:11.437801 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://cf61c9ee9f022d2967874ce287097c590913d9b8658b34b3e6b18761aa7ac5b4" gracePeriod=600 Jan 23 09:05:12 crc kubenswrapper[4982]: I0123 09:05:12.143374 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="cf61c9ee9f022d2967874ce287097c590913d9b8658b34b3e6b18761aa7ac5b4" exitCode=0 Jan 23 09:05:12 crc kubenswrapper[4982]: I0123 09:05:12.143460 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"cf61c9ee9f022d2967874ce287097c590913d9b8658b34b3e6b18761aa7ac5b4"} Jan 23 09:05:12 crc kubenswrapper[4982]: I0123 09:05:12.143748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561"} Jan 23 09:05:12 crc kubenswrapper[4982]: I0123 09:05:12.143771 4982 scope.go:117] "RemoveContainer" containerID="d078d94e017cc82a6be6c39a869dbc012dadf92423920e173e25b1e28d77fff6" Jan 23 09:07:11 crc kubenswrapper[4982]: I0123 09:07:11.435548 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:07:11 crc kubenswrapper[4982]: I0123 09:07:11.436217 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:07:41 crc kubenswrapper[4982]: I0123 09:07:41.435697 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:07:41 crc kubenswrapper[4982]: I0123 09:07:41.436205 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 
23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.436040 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.437505 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.437662 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.438518 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.438675 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" gracePeriod=600 Jan 23 09:08:11 crc kubenswrapper[4982]: E0123 09:08:11.561786 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.564951 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" exitCode=0 Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.565003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561"} Jan 23 09:08:11 crc kubenswrapper[4982]: I0123 09:08:11.565039 4982 scope.go:117] "RemoveContainer" containerID="cf61c9ee9f022d2967874ce287097c590913d9b8658b34b3e6b18761aa7ac5b4" Jan 23 09:08:12 crc kubenswrapper[4982]: I0123 09:08:12.576397 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:08:12 crc kubenswrapper[4982]: E0123 09:08:12.576658 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:08:25 crc kubenswrapper[4982]: I0123 09:08:25.834051 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:08:25 crc kubenswrapper[4982]: E0123 09:08:25.834949 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:08:40 crc kubenswrapper[4982]: I0123 09:08:40.827085 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:08:40 crc kubenswrapper[4982]: E0123 09:08:40.827816 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:08:53 crc kubenswrapper[4982]: I0123 09:08:53.827479 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:08:53 crc kubenswrapper[4982]: E0123 09:08:53.828164 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:09:04 crc kubenswrapper[4982]: I0123 09:09:04.826968 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:09:04 crc kubenswrapper[4982]: E0123 09:09:04.827599 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:09:17 crc kubenswrapper[4982]: I0123 09:09:17.827345 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:09:17 crc kubenswrapper[4982]: E0123 09:09:17.828719 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:09:28 crc kubenswrapper[4982]: I0123 09:09:28.827393 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:09:28 crc kubenswrapper[4982]: E0123 09:09:28.828147 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:09:42 crc kubenswrapper[4982]: I0123 09:09:42.827112 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:09:42 crc kubenswrapper[4982]: E0123 09:09:42.827697 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:09:54 crc kubenswrapper[4982]: I0123 09:09:54.827615 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:09:54 crc kubenswrapper[4982]: E0123 09:09:54.828973 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:10:07 crc kubenswrapper[4982]: I0123 09:10:07.830822 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:10:07 crc kubenswrapper[4982]: E0123 09:10:07.831640 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:10:22 crc kubenswrapper[4982]: I0123 09:10:22.827649 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:10:22 crc kubenswrapper[4982]: E0123 09:10:22.828378 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:10:36 crc kubenswrapper[4982]: I0123 09:10:36.827472 4982 scope.go:117] "RemoveContainer" 
containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:10:36 crc kubenswrapper[4982]: E0123 09:10:36.828087 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:10:51 crc kubenswrapper[4982]: I0123 09:10:51.827520 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:10:51 crc kubenswrapper[4982]: E0123 09:10:51.828248 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:11:02 crc kubenswrapper[4982]: I0123 09:11:02.827847 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:11:02 crc kubenswrapper[4982]: E0123 09:11:02.828898 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:11:14 crc kubenswrapper[4982]: I0123 09:11:14.827892 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:11:14 crc kubenswrapper[4982]: E0123 09:11:14.829071 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:11:27 crc kubenswrapper[4982]: I0123 09:11:27.827499 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:11:27 crc kubenswrapper[4982]: E0123 09:11:27.828298 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:11:42 crc kubenswrapper[4982]: I0123 09:11:42.827373 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:11:42 crc kubenswrapper[4982]: E0123 09:11:42.828081 4982 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:11:53 crc kubenswrapper[4982]: I0123 09:11:53.827487 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:11:53 crc kubenswrapper[4982]: E0123 09:11:53.828361 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:12:06 crc kubenswrapper[4982]: I0123 09:12:06.827538 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:12:06 crc kubenswrapper[4982]: E0123 09:12:06.828443 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:12:18 crc kubenswrapper[4982]: I0123 09:12:18.828015 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:12:18 crc kubenswrapper[4982]: E0123 09:12:18.829406 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:12:30 crc kubenswrapper[4982]: I0123 09:12:30.828583 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:12:30 crc kubenswrapper[4982]: E0123 09:12:30.829961 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:12:43 crc kubenswrapper[4982]: I0123 09:12:43.827510 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:12:43 crc kubenswrapper[4982]: E0123 09:12:43.829476 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.849493 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqmg7"]
Jan 23 09:12:54 crc kubenswrapper[4982]: E0123 09:12:54.850388 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="extract-utilities"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850409 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="extract-utilities"
Jan 23 09:12:54 crc kubenswrapper[4982]: E0123 09:12:54.850429 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="registry-server"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850437 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="registry-server"
Jan 23 09:12:54 crc kubenswrapper[4982]: E0123 09:12:54.850591 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="extract-utilities"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850602 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="extract-utilities"
Jan 23 09:12:54 crc kubenswrapper[4982]: E0123 09:12:54.850616 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="extract-content"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850643 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="extract-content"
Jan 23 09:12:54 crc kubenswrapper[4982]: E0123 09:12:54.850664 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="extract-content"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850672 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="extract-content"
Jan 23 09:12:54 crc kubenswrapper[4982]: E0123 09:12:54.850684 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="registry-server"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850691 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="registry-server"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850907 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b712112-fe1d-4b5a-8f3d-9a917e842cc8" containerName="registry-server"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.850925 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f300d5a0-6591-445a-bf1b-d4e0ba5e8dcc" containerName="registry-server"
Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.852359 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.872952 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqmg7"] Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.935650 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjb5p\" (UniqueName: \"kubernetes.io/projected/e227a2fe-1594-4838-b6ed-408bb1730273-kube-api-access-rjb5p\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.936677 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e227a2fe-1594-4838-b6ed-408bb1730273-utilities\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:54 crc kubenswrapper[4982]: I0123 09:12:54.936714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e227a2fe-1594-4838-b6ed-408bb1730273-catalog-content\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.038781 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjb5p\" (UniqueName: \"kubernetes.io/projected/e227a2fe-1594-4838-b6ed-408bb1730273-kube-api-access-rjb5p\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.038856 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e227a2fe-1594-4838-b6ed-408bb1730273-utilities\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.038928 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e227a2fe-1594-4838-b6ed-408bb1730273-catalog-content\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.039473 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e227a2fe-1594-4838-b6ed-408bb1730273-catalog-content\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.039856 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e227a2fe-1594-4838-b6ed-408bb1730273-utilities\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.063550 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rjb5p\" (UniqueName: \"kubernetes.io/projected/e227a2fe-1594-4838-b6ed-408bb1730273-kube-api-access-rjb5p\") pod \"certified-operators-rqmg7\" (UID: \"e227a2fe-1594-4838-b6ed-408bb1730273\") " pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.172325 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqmg7" Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.525388 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqmg7"] Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.926451 4982 generic.go:334] "Generic (PLEG): container finished" podID="e227a2fe-1594-4838-b6ed-408bb1730273" containerID="767d3d3fd8be4816679d24b277c5044f61dfd330ba4c182b8618e4b6ee677172" exitCode=0 Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.926500 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqmg7" event={"ID":"e227a2fe-1594-4838-b6ed-408bb1730273","Type":"ContainerDied","Data":"767d3d3fd8be4816679d24b277c5044f61dfd330ba4c182b8618e4b6ee677172"} Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.926548 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqmg7" event={"ID":"e227a2fe-1594-4838-b6ed-408bb1730273","Type":"ContainerStarted","Data":"6cd45fbacc6132b61ad2396df060abfeb8b884920236294fc7b2a0edc22e36be"} Jan 23 09:12:55 crc kubenswrapper[4982]: I0123 09:12:55.928738 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:12:56 crc kubenswrapper[4982]: I0123 09:12:56.827967 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:12:56 crc kubenswrapper[4982]: E0123 09:12:56.828461 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:13:00 crc kubenswrapper[4982]: I0123 09:13:00.982888 4982 generic.go:334] "Generic (PLEG): container finished" podID="e227a2fe-1594-4838-b6ed-408bb1730273" containerID="60237c114b33c30ce4f20a64901658fac45d78bfb321dfea8e98f209595de98e" exitCode=0 Jan 23 09:13:00 crc kubenswrapper[4982]: I0123 09:13:00.982953 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqmg7" event={"ID":"e227a2fe-1594-4838-b6ed-408bb1730273","Type":"ContainerDied","Data":"60237c114b33c30ce4f20a64901658fac45d78bfb321dfea8e98f209595de98e"} Jan 23 09:13:01 crc kubenswrapper[4982]: I0123 09:13:01.992976 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqmg7" event={"ID":"e227a2fe-1594-4838-b6ed-408bb1730273","Type":"ContainerStarted","Data":"181de8ab276b3c8ef63fd446121e86839d37d2ef5c74f1fd72d8471bd72b6659"} Jan 23 09:13:02 crc kubenswrapper[4982]: I0123 09:13:02.019929 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqmg7" podStartSLOduration=2.465774459 
Jan 23 09:13:05 crc kubenswrapper[4982]: I0123 09:13:05.174054 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqmg7"
Jan 23 09:13:05 crc kubenswrapper[4982]: I0123 09:13:05.175407 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqmg7"
Jan 23 09:13:05 crc kubenswrapper[4982]: I0123 09:13:05.223311 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqmg7"
Jan 23 09:13:06 crc kubenswrapper[4982]: I0123 09:13:06.077751 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqmg7"
Jan 23 09:13:06 crc kubenswrapper[4982]: I0123 09:13:06.169968 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqmg7"]
Jan 23 09:13:06 crc kubenswrapper[4982]: I0123 09:13:06.219365 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pqbt"]
Jan 23 09:13:06 crc kubenswrapper[4982]: I0123 09:13:06.219970 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5pqbt" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="registry-server" containerID="cri-o://920ef0d8e4c64224ec06c7d8b192c386c6cedbae38822b844015951509a918ed" gracePeriod=2
Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.040495 4982 generic.go:334] "Generic (PLEG): container finished" podID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerID="920ef0d8e4c64224ec06c7d8b192c386c6cedbae38822b844015951509a918ed" exitCode=0
Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.040591 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pqbt" event={"ID":"2678d34c-2a1d-43ae-bc45-7c7763a89190","Type":"ContainerDied","Data":"920ef0d8e4c64224ec06c7d8b192c386c6cedbae38822b844015951509a918ed"}
Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.162585 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.223946 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-utilities\") pod \"2678d34c-2a1d-43ae-bc45-7c7763a89190\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.224070 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-catalog-content\") pod \"2678d34c-2a1d-43ae-bc45-7c7763a89190\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.224116 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfzj4\" (UniqueName: \"kubernetes.io/projected/2678d34c-2a1d-43ae-bc45-7c7763a89190-kube-api-access-tfzj4\") pod \"2678d34c-2a1d-43ae-bc45-7c7763a89190\" (UID: \"2678d34c-2a1d-43ae-bc45-7c7763a89190\") " Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.224512 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-utilities" (OuterVolumeSpecName: "utilities") pod "2678d34c-2a1d-43ae-bc45-7c7763a89190" (UID: "2678d34c-2a1d-43ae-bc45-7c7763a89190"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.229656 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2678d34c-2a1d-43ae-bc45-7c7763a89190-kube-api-access-tfzj4" (OuterVolumeSpecName: "kube-api-access-tfzj4") pod "2678d34c-2a1d-43ae-bc45-7c7763a89190" (UID: "2678d34c-2a1d-43ae-bc45-7c7763a89190"). InnerVolumeSpecName "kube-api-access-tfzj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.274895 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2678d34c-2a1d-43ae-bc45-7c7763a89190" (UID: "2678d34c-2a1d-43ae-bc45-7c7763a89190"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.326191 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.326235 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2678d34c-2a1d-43ae-bc45-7c7763a89190-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:07 crc kubenswrapper[4982]: I0123 09:13:07.326249 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfzj4\" (UniqueName: \"kubernetes.io/projected/2678d34c-2a1d-43ae-bc45-7c7763a89190-kube-api-access-tfzj4\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.054113 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pqbt" event={"ID":"2678d34c-2a1d-43ae-bc45-7c7763a89190","Type":"ContainerDied","Data":"f44d437db05246404b081c988f8444dcd43e70645eaf44c0c7e9d3a8a9b50243"} Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.054193 4982 scope.go:117] "RemoveContainer" containerID="920ef0d8e4c64224ec06c7d8b192c386c6cedbae38822b844015951509a918ed" Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.054131 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pqbt" Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.077177 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pqbt"] Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.079059 4982 scope.go:117] "RemoveContainer" containerID="7b160df9479a77e9bc75e31a842de3cb1df5b76c90e79c6b86de24c0445f669a" Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.084230 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5pqbt"] Jan 23 09:13:08 crc kubenswrapper[4982]: I0123 09:13:08.102200 4982 scope.go:117] "RemoveContainer" containerID="b5fd32d3f33f9b304d865ee1dc5bd1638c725c970a6e81a328cc59fd01ba6cb1" Jan 23 09:13:09 crc kubenswrapper[4982]: I0123 09:13:09.837375 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" path="/var/lib/kubelet/pods/2678d34c-2a1d-43ae-bc45-7c7763a89190/volumes" Jan 23 09:13:11 crc kubenswrapper[4982]: I0123 09:13:11.827345 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561" Jan 23 09:13:13 crc kubenswrapper[4982]: I0123 09:13:13.095733 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"3382718d5d51a4f56a70eff82f0c152683ad214583ff18c7e8933d4528a378de"} Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.920582 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-httnl"] Jan 23 09:13:53 crc kubenswrapper[4982]: E0123 09:13:53.924731 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="extract-content" Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.924770 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="extract-content" Jan 23 09:13:53 crc kubenswrapper[4982]: E0123 09:13:53.924819 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="extract-utilities" Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.924826 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="extract-utilities" Jan 23 09:13:53 crc kubenswrapper[4982]: E0123 09:13:53.924837 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="registry-server" Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.924843 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="registry-server" Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.925119 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2678d34c-2a1d-43ae-bc45-7c7763a89190" containerName="registry-server" Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.926353 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:53 crc kubenswrapper[4982]: I0123 09:13:53.929456 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-httnl"] Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.072297 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-utilities\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.072399 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2ljq\" (UniqueName: \"kubernetes.io/projected/3f9f455b-5a39-4b10-9431-28439b818d1c-kube-api-access-m2ljq\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.072487 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-catalog-content\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.173429 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-catalog-content\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.173511 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-utilities\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.173575 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2ljq\" (UniqueName: \"kubernetes.io/projected/3f9f455b-5a39-4b10-9431-28439b818d1c-kube-api-access-m2ljq\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.173963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-catalog-content\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.174121 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-utilities\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.193512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2ljq\" (UniqueName: \"kubernetes.io/projected/3f9f455b-5a39-4b10-9431-28439b818d1c-kube-api-access-m2ljq\") pod \"redhat-marketplace-httnl\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.300209 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:13:54 crc kubenswrapper[4982]: I0123 09:13:54.759886 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-httnl"] Jan 23 09:13:55 crc kubenswrapper[4982]: I0123 09:13:55.457017 4982 generic.go:334] "Generic (PLEG): container finished" podID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerID="83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba" exitCode=0 Jan 23 09:13:55 crc kubenswrapper[4982]: I0123 09:13:55.457117 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-httnl" event={"ID":"3f9f455b-5a39-4b10-9431-28439b818d1c","Type":"ContainerDied","Data":"83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba"} Jan 23 09:13:55 crc kubenswrapper[4982]: I0123 09:13:55.457364 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-httnl" event={"ID":"3f9f455b-5a39-4b10-9431-28439b818d1c","Type":"ContainerStarted","Data":"3f5920cd096563ac770806137d0d86d9881d77df650993ef30d4c46ff994cad7"} Jan 23 09:13:57 crc kubenswrapper[4982]: I0123 09:13:57.475022 4982 generic.go:334] "Generic (PLEG): container finished" podID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerID="0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff" exitCode=0 Jan 23 09:13:57 crc kubenswrapper[4982]: I0123 09:13:57.475142 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-httnl" event={"ID":"3f9f455b-5a39-4b10-9431-28439b818d1c","Type":"ContainerDied","Data":"0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff"} Jan 23 09:13:58 crc kubenswrapper[4982]: I0123 09:13:58.485238 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-httnl" 
event={"ID":"3f9f455b-5a39-4b10-9431-28439b818d1c","Type":"ContainerStarted","Data":"4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9"} Jan 23 09:13:58 crc kubenswrapper[4982]: I0123 09:13:58.504804 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-httnl" podStartSLOduration=3.049165299 podStartE2EDuration="5.504782995s" podCreationTimestamp="2026-01-23 09:13:53 +0000 UTC" firstStartedPulling="2026-01-23 09:13:55.460349227 +0000 UTC m=+4709.940712613" lastFinishedPulling="2026-01-23 09:13:57.915966923 +0000 UTC m=+4712.396330309" observedRunningTime="2026-01-23 09:13:58.499618766 +0000 UTC m=+4712.979982162" watchObservedRunningTime="2026-01-23 09:13:58.504782995 +0000 UTC m=+4712.985146391" Jan 23 09:14:04 crc kubenswrapper[4982]: I0123 09:14:04.301242 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:14:04 crc kubenswrapper[4982]: I0123 09:14:04.301921 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:14:04 crc kubenswrapper[4982]: I0123 09:14:04.363690 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:14:04 crc kubenswrapper[4982]: I0123 09:14:04.587231 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:14:04 crc kubenswrapper[4982]: I0123 09:14:04.639793 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-httnl"] Jan 23 09:14:06 crc kubenswrapper[4982]: I0123 09:14:06.546964 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-httnl" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="registry-server" containerID="cri-o://4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9" gracePeriod=2 Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.270994 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.369994 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-utilities\") pod \"3f9f455b-5a39-4b10-9431-28439b818d1c\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.370075 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-catalog-content\") pod \"3f9f455b-5a39-4b10-9431-28439b818d1c\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.370200 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2ljq\" (UniqueName: \"kubernetes.io/projected/3f9f455b-5a39-4b10-9431-28439b818d1c-kube-api-access-m2ljq\") pod \"3f9f455b-5a39-4b10-9431-28439b818d1c\" (UID: \"3f9f455b-5a39-4b10-9431-28439b818d1c\") " Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.371125 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-utilities" (OuterVolumeSpecName: "utilities") pod "3f9f455b-5a39-4b10-9431-28439b818d1c" (UID: "3f9f455b-5a39-4b10-9431-28439b818d1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.376369 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9f455b-5a39-4b10-9431-28439b818d1c-kube-api-access-m2ljq" (OuterVolumeSpecName: "kube-api-access-m2ljq") pod "3f9f455b-5a39-4b10-9431-28439b818d1c" (UID: "3f9f455b-5a39-4b10-9431-28439b818d1c"). InnerVolumeSpecName "kube-api-access-m2ljq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.394276 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f9f455b-5a39-4b10-9431-28439b818d1c" (UID: "3f9f455b-5a39-4b10-9431-28439b818d1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.472431 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2ljq\" (UniqueName: \"kubernetes.io/projected/3f9f455b-5a39-4b10-9431-28439b818d1c-kube-api-access-m2ljq\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.472473 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.472483 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9f455b-5a39-4b10-9431-28439b818d1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.556424 4982 generic.go:334] "Generic (PLEG): container finished" podID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerID="4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9" exitCode=0 Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.556478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-httnl" event={"ID":"3f9f455b-5a39-4b10-9431-28439b818d1c","Type":"ContainerDied","Data":"4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9"} Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.556505 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-httnl" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.556518 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-httnl" event={"ID":"3f9f455b-5a39-4b10-9431-28439b818d1c","Type":"ContainerDied","Data":"3f5920cd096563ac770806137d0d86d9881d77df650993ef30d4c46ff994cad7"} Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.556542 4982 scope.go:117] "RemoveContainer" containerID="4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.588907 4982 scope.go:117] "RemoveContainer" containerID="0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.598421 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-httnl"] Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.605420 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-httnl"] Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.617412 4982 scope.go:117] "RemoveContainer" containerID="83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.640671 4982 scope.go:117] "RemoveContainer" containerID="4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9" Jan 23 09:14:07 crc kubenswrapper[4982]: E0123 09:14:07.641099 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9\": container with ID starting with 4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9 not found: ID does not exist" containerID="4279bfe29853e3e2d3074a9b0f1fb7cbc383493ea279c758181a57830619a2b9" Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.641137 4982 
Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.641165 4982 scope.go:117] "RemoveContainer" containerID="0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff"
Jan 23 09:14:07 crc kubenswrapper[4982]: E0123 09:14:07.641451 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff\": container with ID starting with 0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff not found: ID does not exist" containerID="0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff"
Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.641477 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff"} err="failed to get container status \"0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff\": rpc error: code = NotFound desc = could not find container \"0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff\": container with ID starting with 0e5864f7a6b3d6c39dc14823ffb713c048be04f72577067ed7d63758df1ff6ff not found: ID does not exist"
Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.641495 4982 scope.go:117] "RemoveContainer" containerID="83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba"
Jan 23 09:14:07 crc kubenswrapper[4982]: E0123 09:14:07.641871 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba\": container with ID starting with 83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba not found: ID does not exist" containerID="83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba"
Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.641896 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba"} err="failed to get container status \"83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba\": rpc error: code = NotFound desc = could not find container \"83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba\": container with ID starting with 83403eddab1c215a417c45f1129e30b6ec8edecc44cd321241647f57ea735aba not found: ID does not exist"
Jan 23 09:14:07 crc kubenswrapper[4982]: I0123 09:14:07.838815 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" path="/var/lib/kubelet/pods/3f9f455b-5a39-4b10-9431-28439b818d1c/volumes"
Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.327459 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdd4w"]
Jan 23 09:14:58 crc kubenswrapper[4982]: E0123 09:14:58.328375 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="registry-server" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.328392 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="registry-server" Jan 23 09:14:58 crc kubenswrapper[4982]: E0123 09:14:58.328410 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="extract-content" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.328417 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="extract-content" Jan 23 09:14:58 crc kubenswrapper[4982]: E0123 09:14:58.328439 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="extract-utilities" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.328447 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="extract-utilities" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.328666 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9f455b-5a39-4b10-9431-28439b818d1c" containerName="registry-server" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.330049 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.344756 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdd4w"] Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.440602 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66rr5\" (UniqueName: \"kubernetes.io/projected/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-kube-api-access-66rr5\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.440868 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-utilities\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.441069 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-catalog-content\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.542754 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-catalog-content\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.542908 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66rr5\" (UniqueName: \"kubernetes.io/projected/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-kube-api-access-66rr5\") pod \"redhat-operators-mdd4w\" 
(UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.542942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-utilities\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.543471 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-utilities\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.543474 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-catalog-content\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.567237 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66rr5\" (UniqueName: \"kubernetes.io/projected/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-kube-api-access-66rr5\") pod \"redhat-operators-mdd4w\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:58 crc kubenswrapper[4982]: I0123 09:14:58.657335 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:14:59 crc kubenswrapper[4982]: I0123 09:14:59.137235 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdd4w"] Jan 23 09:14:59 crc kubenswrapper[4982]: I0123 09:14:59.991897 4982 generic.go:334] "Generic (PLEG): container finished" podID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerID="3dbe133e74efde52c61a7865ba9a87787956a3145188003e507917dc3ebeb266" exitCode=0 Jan 23 09:14:59 crc kubenswrapper[4982]: I0123 09:14:59.991950 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdd4w" event={"ID":"c92bfe44-3caa-4ae2-a676-41621b6cf6fd","Type":"ContainerDied","Data":"3dbe133e74efde52c61a7865ba9a87787956a3145188003e507917dc3ebeb266"} Jan 23 09:14:59 crc kubenswrapper[4982]: I0123 09:14:59.991979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdd4w" event={"ID":"c92bfe44-3caa-4ae2-a676-41621b6cf6fd","Type":"ContainerStarted","Data":"90794cabb206357edd71902a38a7c5416f7c75e9dfa664fd1f54118d53c7c36a"} Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.138842 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d"] Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.140189 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.142768 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.143000 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.149613 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d"] Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.269291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe6e8b15-b430-41b2-81aa-5a52a6421f70-secret-volume\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.269381 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe6e8b15-b430-41b2-81aa-5a52a6421f70-config-volume\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.269523 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdx5\" (UniqueName: \"kubernetes.io/projected/fe6e8b15-b430-41b2-81aa-5a52a6421f70-kube-api-access-rxdx5\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.371430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe6e8b15-b430-41b2-81aa-5a52a6421f70-secret-volume\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.371522 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe6e8b15-b430-41b2-81aa-5a52a6421f70-config-volume\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.371638 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdx5\" (UniqueName: \"kubernetes.io/projected/fe6e8b15-b430-41b2-81aa-5a52a6421f70-kube-api-access-rxdx5\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.373650 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe6e8b15-b430-41b2-81aa-5a52a6421f70-config-volume\") pod 
\"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.379007 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe6e8b15-b430-41b2-81aa-5a52a6421f70-secret-volume\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.393474 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdx5\" (UniqueName: \"kubernetes.io/projected/fe6e8b15-b430-41b2-81aa-5a52a6421f70-kube-api-access-rxdx5\") pod \"collect-profiles-29485995-6fj4d\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.461163 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:00 crc kubenswrapper[4982]: I0123 09:15:00.900242 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d"] Jan 23 09:15:00 crc kubenswrapper[4982]: W0123 09:15:00.905464 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe6e8b15_b430_41b2_81aa_5a52a6421f70.slice/crio-643b7660cde517570c8473318d3a5d62eedbbd99213725ff123cd34d69cb5dd9 WatchSource:0}: Error finding container 643b7660cde517570c8473318d3a5d62eedbbd99213725ff123cd34d69cb5dd9: Status 404 returned error can't find the container with id 643b7660cde517570c8473318d3a5d62eedbbd99213725ff123cd34d69cb5dd9 Jan 23 09:15:01 crc kubenswrapper[4982]: I0123 09:15:01.001743 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" event={"ID":"fe6e8b15-b430-41b2-81aa-5a52a6421f70","Type":"ContainerStarted","Data":"643b7660cde517570c8473318d3a5d62eedbbd99213725ff123cd34d69cb5dd9"} Jan 23 09:15:02 crc kubenswrapper[4982]: I0123 09:15:02.020360 4982 generic.go:334] "Generic (PLEG): container finished" podID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerID="db014ab123e81ccd46c0e45962d42f2cbb48b34ccf2a261f5c6c1439a9861c6a" exitCode=0 Jan 23 09:15:02 crc kubenswrapper[4982]: I0123 09:15:02.020417 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdd4w" event={"ID":"c92bfe44-3caa-4ae2-a676-41621b6cf6fd","Type":"ContainerDied","Data":"db014ab123e81ccd46c0e45962d42f2cbb48b34ccf2a261f5c6c1439a9861c6a"} Jan 23 09:15:02 crc kubenswrapper[4982]: I0123 09:15:02.024154 4982 generic.go:334] "Generic (PLEG): container finished" podID="fe6e8b15-b430-41b2-81aa-5a52a6421f70" containerID="f996d8dd6e3353f272cb6233cc6fd25c99f68b7f7164bd38b791099048b4a806" exitCode=0 Jan 23 09:15:02 crc kubenswrapper[4982]: I0123 09:15:02.024225 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" event={"ID":"fe6e8b15-b430-41b2-81aa-5a52a6421f70","Type":"ContainerDied","Data":"f996d8dd6e3353f272cb6233cc6fd25c99f68b7f7164bd38b791099048b4a806"} Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.298231 4982 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.426351 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe6e8b15-b430-41b2-81aa-5a52a6421f70-config-volume\") pod \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.426424 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe6e8b15-b430-41b2-81aa-5a52a6421f70-secret-volume\") pod \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.426465 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxdx5\" (UniqueName: \"kubernetes.io/projected/fe6e8b15-b430-41b2-81aa-5a52a6421f70-kube-api-access-rxdx5\") pod \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\" (UID: \"fe6e8b15-b430-41b2-81aa-5a52a6421f70\") " Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.427865 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6e8b15-b430-41b2-81aa-5a52a6421f70-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe6e8b15-b430-41b2-81aa-5a52a6421f70" (UID: "fe6e8b15-b430-41b2-81aa-5a52a6421f70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.434387 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6e8b15-b430-41b2-81aa-5a52a6421f70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe6e8b15-b430-41b2-81aa-5a52a6421f70" (UID: "fe6e8b15-b430-41b2-81aa-5a52a6421f70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.434450 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6e8b15-b430-41b2-81aa-5a52a6421f70-kube-api-access-rxdx5" (OuterVolumeSpecName: "kube-api-access-rxdx5") pod "fe6e8b15-b430-41b2-81aa-5a52a6421f70" (UID: "fe6e8b15-b430-41b2-81aa-5a52a6421f70"). InnerVolumeSpecName "kube-api-access-rxdx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.528283 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe6e8b15-b430-41b2-81aa-5a52a6421f70-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.528327 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe6e8b15-b430-41b2-81aa-5a52a6421f70-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:03 crc kubenswrapper[4982]: I0123 09:15:03.528338 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxdx5\" (UniqueName: \"kubernetes.io/projected/fe6e8b15-b430-41b2-81aa-5a52a6421f70-kube-api-access-rxdx5\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.039221 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.039207 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d" event={"ID":"fe6e8b15-b430-41b2-81aa-5a52a6421f70","Type":"ContainerDied","Data":"643b7660cde517570c8473318d3a5d62eedbbd99213725ff123cd34d69cb5dd9"} Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.039673 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="643b7660cde517570c8473318d3a5d62eedbbd99213725ff123cd34d69cb5dd9" Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.043873 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdd4w" event={"ID":"c92bfe44-3caa-4ae2-a676-41621b6cf6fd","Type":"ContainerStarted","Data":"61ab35efaa79241d71b20dc26c7de6c5a4aea5697745127de5887dc6117e69a2"} Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.068378 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdd4w" podStartSLOduration=2.662816502 podStartE2EDuration="6.068343254s" podCreationTimestamp="2026-01-23 09:14:58 +0000 UTC" firstStartedPulling="2026-01-23 09:14:59.993929823 +0000 UTC m=+4774.474293209" lastFinishedPulling="2026-01-23 09:15:03.399456575 +0000 UTC m=+4777.879819961" observedRunningTime="2026-01-23 09:15:04.067151062 +0000 UTC m=+4778.547514468" watchObservedRunningTime="2026-01-23 09:15:04.068343254 +0000 UTC m=+4778.548706650" Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.387846 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"] Jan 23 09:15:04 crc kubenswrapper[4982]: I0123 09:15:04.397022 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485950-glxt8"] Jan 23 09:15:05 crc kubenswrapper[4982]: I0123 09:15:05.871674 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cfa26e-8192-42fa-8290-86523a48a7ec" path="/var/lib/kubelet/pods/08cfa26e-8192-42fa-8290-86523a48a7ec/volumes" Jan 23 09:15:08 crc kubenswrapper[4982]: I0123 09:15:08.657917 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:15:08 crc kubenswrapper[4982]: I0123 09:15:08.658444 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:15:09 crc kubenswrapper[4982]: I0123 09:15:09.786721 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mdd4w" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="registry-server" probeResult="failure" output=< Jan 23 09:15:09 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 09:15:09 crc kubenswrapper[4982]: > Jan 23 09:15:18 crc kubenswrapper[4982]: I0123 09:15:18.697487 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:15:18 crc kubenswrapper[4982]: I0123 09:15:18.746633 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:15:18 crc kubenswrapper[4982]: I0123 09:15:18.939031 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-mdd4w"] Jan 23 09:15:20 crc kubenswrapper[4982]: I0123 09:15:20.178889 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mdd4w" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="registry-server" containerID="cri-o://61ab35efaa79241d71b20dc26c7de6c5a4aea5697745127de5887dc6117e69a2" gracePeriod=2 Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.197344 4982 generic.go:334] "Generic (PLEG): container finished" podID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerID="61ab35efaa79241d71b20dc26c7de6c5a4aea5697745127de5887dc6117e69a2" exitCode=0 Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.197405 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdd4w" event={"ID":"c92bfe44-3caa-4ae2-a676-41621b6cf6fd","Type":"ContainerDied","Data":"61ab35efaa79241d71b20dc26c7de6c5a4aea5697745127de5887dc6117e69a2"} Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.526591 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdd4w" Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.630353 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-utilities\") pod \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.630433 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-catalog-content\") pod \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.630511 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66rr5\" (UniqueName: \"kubernetes.io/projected/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-kube-api-access-66rr5\") pod \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\" (UID: \"c92bfe44-3caa-4ae2-a676-41621b6cf6fd\") " Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.631652 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-utilities" (OuterVolumeSpecName: "utilities") pod "c92bfe44-3caa-4ae2-a676-41621b6cf6fd" (UID: "c92bfe44-3caa-4ae2-a676-41621b6cf6fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.638010 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-kube-api-access-66rr5" (OuterVolumeSpecName: "kube-api-access-66rr5") pod "c92bfe44-3caa-4ae2-a676-41621b6cf6fd" (UID: "c92bfe44-3caa-4ae2-a676-41621b6cf6fd"). InnerVolumeSpecName "kube-api-access-66rr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.732438 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.732478 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66rr5\" (UniqueName: \"kubernetes.io/projected/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-kube-api-access-66rr5\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.767652 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c92bfe44-3caa-4ae2-a676-41621b6cf6fd" (UID: "c92bfe44-3caa-4ae2-a676-41621b6cf6fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:15:22 crc kubenswrapper[4982]: I0123 09:15:22.833659 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c92bfe44-3caa-4ae2-a676-41621b6cf6fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.207939 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdd4w" event={"ID":"c92bfe44-3caa-4ae2-a676-41621b6cf6fd","Type":"ContainerDied","Data":"90794cabb206357edd71902a38a7c5416f7c75e9dfa664fd1f54118d53c7c36a"} Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.208614 4982 scope.go:117] "RemoveContainer" containerID="61ab35efaa79241d71b20dc26c7de6c5a4aea5697745127de5887dc6117e69a2" Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.208018 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdd4w"
Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.228361 4982 scope.go:117] "RemoveContainer" containerID="db014ab123e81ccd46c0e45962d42f2cbb48b34ccf2a261f5c6c1439a9861c6a"
Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.247242 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mdd4w"]
Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.254755 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mdd4w"]
Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.266456 4982 scope.go:117] "RemoveContainer" containerID="3dbe133e74efde52c61a7865ba9a87787956a3145188003e507917dc3ebeb266"
Jan 23 09:15:23 crc kubenswrapper[4982]: I0123 09:15:23.838045 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" path="/var/lib/kubelet/pods/c92bfe44-3caa-4ae2-a676-41621b6cf6fd/volumes"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.960222 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qd8b4"]
Jan 23 09:15:31 crc kubenswrapper[4982]: E0123 09:15:31.961102 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="registry-server"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.961115 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="registry-server"
Jan 23 09:15:31 crc kubenswrapper[4982]: E0123 09:15:31.961131 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6e8b15-b430-41b2-81aa-5a52a6421f70" containerName="collect-profiles"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.961138 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6e8b15-b430-41b2-81aa-5a52a6421f70" containerName="collect-profiles"
Jan 23 09:15:31 crc kubenswrapper[4982]: E0123 09:15:31.961148 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="extract-utilities"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.961154 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="extract-utilities"
Jan 23 09:15:31 crc kubenswrapper[4982]: E0123 09:15:31.961168 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="extract-content"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.961174 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="extract-content"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.961339 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6e8b15-b430-41b2-81aa-5a52a6421f70" containerName="collect-profiles"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.961350 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c92bfe44-3caa-4ae2-a676-41621b6cf6fd" containerName="registry-server"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.962491 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qd8b4"
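The burst of cpu_manager.go:410, state_mem.go:107 and memory_manager.go:354 entries above runs while the new community-operators-qd8b4 pod is being admitted: per-container CPU and memory assignments still recorded for pods that no longer exist (the just-removed redhat-operators-mdd4w and collect-profiles pods) are purged first. A minimal sketch of that stale-state cleanup pattern; the data shapes are assumptions for illustration, not the kubelet's actual state types:

package main

import "fmt"

// removeStaleState drops recorded per-container assignments whose pod
// is no longer in the active set, mirroring the "RemoveStaleState:
// removing container" / "Deleted CPUSet assignment" pairs above.
type containerKey struct{ podUID, container string }

func removeStaleState(assignments map[containerKey]string, activePods map[string]bool) {
	for key := range assignments { // deleting entries during range is safe in Go
		if !activePods[key.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", key.podUID, key.container)
			delete(assignments, key)
		}
	}
}

func main() {
	assignments := map[containerKey]string{
		{"c92bfe44-3caa-4ae2-a676-41621b6cf6fd", "registry-server"}:  "0-3",
		{"c92bfe44-3caa-4ae2-a676-41621b6cf6fd", "extract-content"}:  "0-3",
		{"fe6e8b15-b430-41b2-81aa-5a52a6421f70", "collect-profiles"}: "0-3",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is active any more
	fmt.Println("remaining assignments:", len(assignments))
}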
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.971517 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-catalog-content\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.971581 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-utilities\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.971728 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mft5b\" (UniqueName: \"kubernetes.io/projected/60de3469-099a-4945-8ee3-c25b1558ea37-kube-api-access-mft5b\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:31 crc kubenswrapper[4982]: I0123 09:15:31.974944 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qd8b4"]
Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.073184 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mft5b\" (UniqueName: \"kubernetes.io/projected/60de3469-099a-4945-8ee3-c25b1558ea37-kube-api-access-mft5b\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.073318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-catalog-content\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.073352 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-utilities\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.073869 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-utilities\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.074145 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-catalog-content\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4"
Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.097297 4982 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-mft5b\" (UniqueName: \"kubernetes.io/projected/60de3469-099a-4945-8ee3-c25b1558ea37-kube-api-access-mft5b\") pod \"community-operators-qd8b4\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.282252 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:32 crc kubenswrapper[4982]: I0123 09:15:32.804753 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qd8b4"] Jan 23 09:15:33 crc kubenswrapper[4982]: I0123 09:15:33.285664 4982 generic.go:334] "Generic (PLEG): container finished" podID="60de3469-099a-4945-8ee3-c25b1558ea37" containerID="a32ea53a2f385b74c4bd3fccd7d1eba542b62cd282c0abc8abd160ba97a6a819" exitCode=0 Jan 23 09:15:33 crc kubenswrapper[4982]: I0123 09:15:33.285978 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qd8b4" event={"ID":"60de3469-099a-4945-8ee3-c25b1558ea37","Type":"ContainerDied","Data":"a32ea53a2f385b74c4bd3fccd7d1eba542b62cd282c0abc8abd160ba97a6a819"} Jan 23 09:15:33 crc kubenswrapper[4982]: I0123 09:15:33.286011 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qd8b4" event={"ID":"60de3469-099a-4945-8ee3-c25b1558ea37","Type":"ContainerStarted","Data":"7cd286f956a6d3b51bbf3bf1cac9dc3d8b2b91404ca28b487e94cc70c2817e18"} Jan 23 09:15:34 crc kubenswrapper[4982]: I0123 09:15:34.298199 4982 generic.go:334] "Generic (PLEG): container finished" podID="60de3469-099a-4945-8ee3-c25b1558ea37" containerID="9589f0277da3d14cb850cf77f9ccf67d6ff937086ca8c7eb01c126910ee5e75b" exitCode=0 Jan 23 09:15:34 crc kubenswrapper[4982]: I0123 09:15:34.298277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qd8b4" event={"ID":"60de3469-099a-4945-8ee3-c25b1558ea37","Type":"ContainerDied","Data":"9589f0277da3d14cb850cf77f9ccf67d6ff937086ca8c7eb01c126910ee5e75b"} Jan 23 09:15:35 crc kubenswrapper[4982]: I0123 09:15:35.310010 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qd8b4" event={"ID":"60de3469-099a-4945-8ee3-c25b1558ea37","Type":"ContainerStarted","Data":"d27330fce0a874a88f7cb06689757c8fcb481b9e41f4bf63b35294dd4928e8a0"} Jan 23 09:15:35 crc kubenswrapper[4982]: I0123 09:15:35.330700 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qd8b4" podStartSLOduration=2.608008989 podStartE2EDuration="4.330682838s" podCreationTimestamp="2026-01-23 09:15:31 +0000 UTC" firstStartedPulling="2026-01-23 09:15:33.287460262 +0000 UTC m=+4807.767823658" lastFinishedPulling="2026-01-23 09:15:35.010134121 +0000 UTC m=+4809.490497507" observedRunningTime="2026-01-23 09:15:35.326829091 +0000 UTC m=+4809.807192507" watchObservedRunningTime="2026-01-23 09:15:35.330682838 +0000 UTC m=+4809.811046224" Jan 23 09:15:41 crc kubenswrapper[4982]: I0123 09:15:41.435665 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:15:41 crc kubenswrapper[4982]: I0123 09:15:41.436219 4982 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:15:42 crc kubenswrapper[4982]: I0123 09:15:42.283508 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:42 crc kubenswrapper[4982]: I0123 09:15:42.283818 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:42 crc kubenswrapper[4982]: I0123 09:15:42.327386 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:42 crc kubenswrapper[4982]: I0123 09:15:42.408182 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:42 crc kubenswrapper[4982]: I0123 09:15:42.572384 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qd8b4"] Jan 23 09:15:42 crc kubenswrapper[4982]: I0123 09:15:42.806945 4982 scope.go:117] "RemoveContainer" containerID="0f720a2146a6395810e2fbd6bc26758d4c522cb24a107d7b8f8b356ff3ca6b1b" Jan 23 09:15:44 crc kubenswrapper[4982]: I0123 09:15:44.379320 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qd8b4" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="registry-server" containerID="cri-o://d27330fce0a874a88f7cb06689757c8fcb481b9e41f4bf63b35294dd4928e8a0" gracePeriod=2 Jan 23 09:15:45 crc kubenswrapper[4982]: I0123 09:15:45.390699 4982 generic.go:334] "Generic (PLEG): container finished" podID="60de3469-099a-4945-8ee3-c25b1558ea37" containerID="d27330fce0a874a88f7cb06689757c8fcb481b9e41f4bf63b35294dd4928e8a0" exitCode=0 Jan 23 09:15:45 crc kubenswrapper[4982]: I0123 09:15:45.390749 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qd8b4" event={"ID":"60de3469-099a-4945-8ee3-c25b1558ea37","Type":"ContainerDied","Data":"d27330fce0a874a88f7cb06689757c8fcb481b9e41f4bf63b35294dd4928e8a0"} Jan 23 09:15:45 crc kubenswrapper[4982]: I0123 09:15:45.897152 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.011515 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-utilities\") pod \"60de3469-099a-4945-8ee3-c25b1558ea37\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.011557 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-catalog-content\") pod \"60de3469-099a-4945-8ee3-c25b1558ea37\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.011694 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mft5b\" (UniqueName: \"kubernetes.io/projected/60de3469-099a-4945-8ee3-c25b1558ea37-kube-api-access-mft5b\") pod \"60de3469-099a-4945-8ee3-c25b1558ea37\" (UID: \"60de3469-099a-4945-8ee3-c25b1558ea37\") " Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.012380 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-utilities" (OuterVolumeSpecName: "utilities") pod "60de3469-099a-4945-8ee3-c25b1558ea37" (UID: "60de3469-099a-4945-8ee3-c25b1558ea37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.018513 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60de3469-099a-4945-8ee3-c25b1558ea37-kube-api-access-mft5b" (OuterVolumeSpecName: "kube-api-access-mft5b") pod "60de3469-099a-4945-8ee3-c25b1558ea37" (UID: "60de3469-099a-4945-8ee3-c25b1558ea37"). InnerVolumeSpecName "kube-api-access-mft5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.063748 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60de3469-099a-4945-8ee3-c25b1558ea37" (UID: "60de3469-099a-4945-8ee3-c25b1558ea37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.115505 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mft5b\" (UniqueName: \"kubernetes.io/projected/60de3469-099a-4945-8ee3-c25b1558ea37-kube-api-access-mft5b\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.115541 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.115551 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60de3469-099a-4945-8ee3-c25b1558ea37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.400942 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qd8b4" event={"ID":"60de3469-099a-4945-8ee3-c25b1558ea37","Type":"ContainerDied","Data":"7cd286f956a6d3b51bbf3bf1cac9dc3d8b2b91404ca28b487e94cc70c2817e18"} Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.400975 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qd8b4" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.401220 4982 scope.go:117] "RemoveContainer" containerID="d27330fce0a874a88f7cb06689757c8fcb481b9e41f4bf63b35294dd4928e8a0" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.419389 4982 scope.go:117] "RemoveContainer" containerID="9589f0277da3d14cb850cf77f9ccf67d6ff937086ca8c7eb01c126910ee5e75b" Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.439977 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qd8b4"] Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.447397 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qd8b4"] Jan 23 09:15:46 crc kubenswrapper[4982]: I0123 09:15:46.454833 4982 scope.go:117] "RemoveContainer" containerID="a32ea53a2f385b74c4bd3fccd7d1eba542b62cd282c0abc8abd160ba97a6a819" Jan 23 09:15:47 crc kubenswrapper[4982]: I0123 09:15:47.840314 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" path="/var/lib/kubelet/pods/60de3469-099a-4945-8ee3-c25b1558ea37/volumes" Jan 23 09:16:11 crc kubenswrapper[4982]: I0123 09:16:11.435483 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:16:11 crc kubenswrapper[4982]: I0123 09:16:11.436033 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:16:41 crc kubenswrapper[4982]: I0123 09:16:41.435468 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 09:16:41 crc kubenswrapper[4982]: I0123 09:16:41.436022 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 09:16:41 crc kubenswrapper[4982]: I0123 09:16:41.436075 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww"
Jan 23 09:16:41 crc kubenswrapper[4982]: I0123 09:16:41.436763 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3382718d5d51a4f56a70eff82f0c152683ad214583ff18c7e8933d4528a378de"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 09:16:41 crc kubenswrapper[4982]: I0123 09:16:41.436818 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://3382718d5d51a4f56a70eff82f0c152683ad214583ff18c7e8933d4528a378de" gracePeriod=600
Jan 23 09:16:42 crc kubenswrapper[4982]: I0123 09:16:42.449692 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="3382718d5d51a4f56a70eff82f0c152683ad214583ff18c7e8933d4528a378de" exitCode=0
Jan 23 09:16:42 crc kubenswrapper[4982]: I0123 09:16:42.449815 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"3382718d5d51a4f56a70eff82f0c152683ad214583ff18c7e8933d4528a378de"}
Jan 23 09:16:42 crc kubenswrapper[4982]: I0123 09:16:42.450994 4982 scope.go:117] "RemoveContainer" containerID="646824b8cb8fc40d12fc4cf56657ea1ce2b39dfbf8cc8e0bb14ba16cdbcaf561"
Jan 23 09:16:42 crc kubenswrapper[4982]: I0123 09:16:42.450836 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"}
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.279168 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-2d8ch"]
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.288568 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-2d8ch"]
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.429226 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-rcxqn"]
Jan 23 09:17:12 crc kubenswrapper[4982]: E0123 09:17:12.429674 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="registry-server"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.429698 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="registry-server"
Jan 23 09:17:12 crc kubenswrapper[4982]: E0123 09:17:12.429717 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="extract-content"
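That closes one full liveness cycle for machine-config-daemon-2tlww: failures were logged at 09:15:41, 09:16:11 and 09:16:41, and on the third the kubelet records "Container machine-config-daemon failed liveness probe, will be restarted", kills the container with gracePeriod=600, and PLEG reports the replacement container started a second later. A sketch of such a probe loop follows; the 30s period and threshold of three are inferred from the failure timestamps, not read from the pod spec:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeLoop GETs the health endpoint every period, counts consecutive
// failures, and invokes restart when the threshold is reached, roughly
// the behaviour visible in the entries above.
func probeLoop(url string, period time.Duration, threshold int, restart func()) {
	client := &http.Client{Timeout: time.Second}
	failures := 0
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for range ticker.C {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < 400 {
				failures = 0 // a healthy probe resets the counter
				continue
			}
			err = fmt.Errorf("status %d", resp.StatusCode)
		}
		failures++
		fmt.Printf("Probe failed (%d/%d): %v\n", failures, threshold, err)
		if failures >= threshold {
			restart() // "failed liveness probe, will be restarted"
			failures = 0
		}
	}
}

func main() {
	probeLoop("http://127.0.0.1:8798/health", 30*time.Second, 3, func() {
		fmt.Println("killing container with gracePeriod=600 and restarting")
	})
}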
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.429728 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="extract-content"
Jan 23 09:17:12 crc kubenswrapper[4982]: E0123 09:17:12.429753 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="extract-utilities"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.429763 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="extract-utilities"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.429985 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="60de3469-099a-4945-8ee3-c25b1558ea37" containerName="registry-server"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.430659 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.434451 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.434648 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.435083 4982 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-wlphp"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.437073 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rcxqn"]
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.443354 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.462267 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mzx\" (UniqueName: \"kubernetes.io/projected/507ac411-0fb0-4bf9-809e-daa2c644e8e9-kube-api-access-l6mzx\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.462358 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/507ac411-0fb0-4bf9-809e-daa2c644e8e9-crc-storage\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.462398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/507ac411-0fb0-4bf9-809e-daa2c644e8e9-node-mnt\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
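Just below, the mount-side mirror of the earlier unmount sequence runs for the new crc-storage-crc-rcxqn pod: each volume is first verified as attached ("operationExecutor.VerifyControllerAttachedVolume started", reconciler_common.go:245, above), then mounted ("operationExecutor.MountVolume started", reconciler_common.go:218) and set up in the pod directory ("MountVolume.SetUp succeeded", operation_generator.go:637). A schematic of that ordering with invented names, not kubelet source:

package main

import "fmt"

// mountAll verifies attachment for every volume first, then mounts and
// sets each one up, mirroring the stage ordering in the log entries.
type desiredVolume struct {
	name   string // e.g. "crc-storage"
	plugin string // e.g. "kubernetes.io/configmap"
}

func mountAll(pod string, vols []desiredVolume) {
	for _, v := range vols {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, pod)
	}
	for _, v := range vols {
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, pod)
		fmt.Printf("MountVolume.SetUp succeeded for volume %q (%s)\n", v.name, v.plugin)
	}
}

func main() {
	mountAll("crc-storage-crc-rcxqn", []desiredVolume{
		{"kube-api-access-l6mzx", "kubernetes.io/projected"},
		{"crc-storage", "kubernetes.io/configmap"},
		{"node-mnt", "kubernetes.io/host-path"},
	})
}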
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.563789 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/507ac411-0fb0-4bf9-809e-daa2c644e8e9-crc-storage\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.563848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/507ac411-0fb0-4bf9-809e-daa2c644e8e9-node-mnt\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.563947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mzx\" (UniqueName: \"kubernetes.io/projected/507ac411-0fb0-4bf9-809e-daa2c644e8e9-kube-api-access-l6mzx\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.565113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/507ac411-0fb0-4bf9-809e-daa2c644e8e9-crc-storage\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.565356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/507ac411-0fb0-4bf9-809e-daa2c644e8e9-node-mnt\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.586764 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6mzx\" (UniqueName: \"kubernetes.io/projected/507ac411-0fb0-4bf9-809e-daa2c644e8e9-kube-api-access-l6mzx\") pod \"crc-storage-crc-rcxqn\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:12 crc kubenswrapper[4982]: I0123 09:17:12.755260 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rcxqn"
Jan 23 09:17:13 crc kubenswrapper[4982]: I0123 09:17:13.178813 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-rcxqn"]
Jan 23 09:17:13 crc kubenswrapper[4982]: I0123 09:17:13.692683 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rcxqn" event={"ID":"507ac411-0fb0-4bf9-809e-daa2c644e8e9","Type":"ContainerStarted","Data":"4fadea2d41aabf0cf5c59af5ea99edd5cb35d35ba1841127e6798b6e67978b29"}
Jan 23 09:17:13 crc kubenswrapper[4982]: I0123 09:17:13.842510 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507ac411-0fb0-4bf9-809e-daa2c644e8e9" path="/var/lib/kubelet/pods/a4fcaf92-86c7-45b6-83e5-e8f2fb77e687/volumes" podUID="a4fcaf92-86c7-45b6-83e5-e8f2fb77e687"
Jan 23 09:17:14 crc kubenswrapper[4982]: I0123 09:17:14.703083 4982 generic.go:334] "Generic (PLEG): container finished" podID="507ac411-0fb0-4bf9-809e-daa2c644e8e9" containerID="4cb6e01d4590f9238e1a8e5bb2978854e9950c7d5e72285195818b8ca3e1abf5" exitCode=0
Jan 23 09:17:14 crc kubenswrapper[4982]: I0123 09:17:14.703157 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rcxqn" event={"ID":"507ac411-0fb0-4bf9-809e-daa2c644e8e9","Type":"ContainerDied","Data":"4cb6e01d4590f9238e1a8e5bb2978854e9950c7d5e72285195818b8ca3e1abf5"}
Jan 23 09:17:15 crc kubenswrapper[4982]: I0123 09:17:15.982375 4982 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="crc-storage/crc-storage-crc-rcxqn" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.019246 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/507ac411-0fb0-4bf9-809e-daa2c644e8e9-node-mnt\") pod \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.019489 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6mzx\" (UniqueName: \"kubernetes.io/projected/507ac411-0fb0-4bf9-809e-daa2c644e8e9-kube-api-access-l6mzx\") pod \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.019557 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/507ac411-0fb0-4bf9-809e-daa2c644e8e9-crc-storage\") pod \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\" (UID: \"507ac411-0fb0-4bf9-809e-daa2c644e8e9\") " Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.020903 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/507ac411-0fb0-4bf9-809e-daa2c644e8e9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "507ac411-0fb0-4bf9-809e-daa2c644e8e9" (UID: "507ac411-0fb0-4bf9-809e-daa2c644e8e9"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.025094 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507ac411-0fb0-4bf9-809e-daa2c644e8e9-kube-api-access-l6mzx" (OuterVolumeSpecName: "kube-api-access-l6mzx") pod "507ac411-0fb0-4bf9-809e-daa2c644e8e9" (UID: "507ac411-0fb0-4bf9-809e-daa2c644e8e9"). InnerVolumeSpecName "kube-api-access-l6mzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.044127 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507ac411-0fb0-4bf9-809e-daa2c644e8e9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "507ac411-0fb0-4bf9-809e-daa2c644e8e9" (UID: "507ac411-0fb0-4bf9-809e-daa2c644e8e9"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.121578 4982 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/507ac411-0fb0-4bf9-809e-daa2c644e8e9-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.121613 4982 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/507ac411-0fb0-4bf9-809e-daa2c644e8e9-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.121638 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6mzx\" (UniqueName: \"kubernetes.io/projected/507ac411-0fb0-4bf9-809e-daa2c644e8e9-kube-api-access-l6mzx\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.721009 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-rcxqn" event={"ID":"507ac411-0fb0-4bf9-809e-daa2c644e8e9","Type":"ContainerDied","Data":"4fadea2d41aabf0cf5c59af5ea99edd5cb35d35ba1841127e6798b6e67978b29"} Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.721054 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fadea2d41aabf0cf5c59af5ea99edd5cb35d35ba1841127e6798b6e67978b29" Jan 23 09:17:16 crc kubenswrapper[4982]: I0123 09:17:16.721096 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-rcxqn" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.289342 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-rcxqn"] Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.295559 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-rcxqn"] Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.444157 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-tvlgg"] Jan 23 09:17:18 crc kubenswrapper[4982]: E0123 09:17:18.444829 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507ac411-0fb0-4bf9-809e-daa2c644e8e9" containerName="storage" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.444852 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="507ac411-0fb0-4bf9-809e-daa2c644e8e9" containerName="storage" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.445063 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="507ac411-0fb0-4bf9-809e-daa2c644e8e9" containerName="storage" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.445559 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.447280 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.449140 4982 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-wlphp" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.449320 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.449440 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.457979 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tvlgg"] Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.563387 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc8qx\" (UniqueName: \"kubernetes.io/projected/13bda864-b495-4a04-b978-583f101c1449-kube-api-access-rc8qx\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.563617 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13bda864-b495-4a04-b978-583f101c1449-node-mnt\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.563814 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13bda864-b495-4a04-b978-583f101c1449-crc-storage\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.665259 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc8qx\" (UniqueName: \"kubernetes.io/projected/13bda864-b495-4a04-b978-583f101c1449-kube-api-access-rc8qx\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.665326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13bda864-b495-4a04-b978-583f101c1449-node-mnt\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.665375 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13bda864-b495-4a04-b978-583f101c1449-crc-storage\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.665617 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13bda864-b495-4a04-b978-583f101c1449-node-mnt\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " 
pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.666319 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13bda864-b495-4a04-b978-583f101c1449-crc-storage\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.683742 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc8qx\" (UniqueName: \"kubernetes.io/projected/13bda864-b495-4a04-b978-583f101c1449-kube-api-access-rc8qx\") pod \"crc-storage-crc-tvlgg\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:18 crc kubenswrapper[4982]: I0123 09:17:18.764379 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:19 crc kubenswrapper[4982]: I0123 09:17:19.190858 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tvlgg"] Jan 23 09:17:19 crc kubenswrapper[4982]: I0123 09:17:19.742286 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tvlgg" event={"ID":"13bda864-b495-4a04-b978-583f101c1449","Type":"ContainerStarted","Data":"267f49a5e241c9abf6095c8e4acdbfc9c27771a14f4121c1a289bc7156461198"} Jan 23 09:17:19 crc kubenswrapper[4982]: I0123 09:17:19.837284 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507ac411-0fb0-4bf9-809e-daa2c644e8e9" path="/var/lib/kubelet/pods/507ac411-0fb0-4bf9-809e-daa2c644e8e9/volumes" Jan 23 09:17:20 crc kubenswrapper[4982]: I0123 09:17:20.751714 4982 generic.go:334] "Generic (PLEG): container finished" podID="13bda864-b495-4a04-b978-583f101c1449" containerID="4dcaf374b032ffd4b8cea1678adf2b8f36ae49c45dc92bbaf217a2bff156cb56" exitCode=0 Jan 23 09:17:20 crc kubenswrapper[4982]: I0123 09:17:20.751761 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tvlgg" event={"ID":"13bda864-b495-4a04-b978-583f101c1449","Type":"ContainerDied","Data":"4dcaf374b032ffd4b8cea1678adf2b8f36ae49c45dc92bbaf217a2bff156cb56"} Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.065746 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.125047 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc8qx\" (UniqueName: \"kubernetes.io/projected/13bda864-b495-4a04-b978-583f101c1449-kube-api-access-rc8qx\") pod \"13bda864-b495-4a04-b978-583f101c1449\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.125094 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13bda864-b495-4a04-b978-583f101c1449-crc-storage\") pod \"13bda864-b495-4a04-b978-583f101c1449\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.125187 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13bda864-b495-4a04-b978-583f101c1449-node-mnt\") pod \"13bda864-b495-4a04-b978-583f101c1449\" (UID: \"13bda864-b495-4a04-b978-583f101c1449\") " Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.125901 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13bda864-b495-4a04-b978-583f101c1449-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "13bda864-b495-4a04-b978-583f101c1449" (UID: "13bda864-b495-4a04-b978-583f101c1449"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.134419 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bda864-b495-4a04-b978-583f101c1449-kube-api-access-rc8qx" (OuterVolumeSpecName: "kube-api-access-rc8qx") pod "13bda864-b495-4a04-b978-583f101c1449" (UID: "13bda864-b495-4a04-b978-583f101c1449"). InnerVolumeSpecName "kube-api-access-rc8qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.167428 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13bda864-b495-4a04-b978-583f101c1449-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "13bda864-b495-4a04-b978-583f101c1449" (UID: "13bda864-b495-4a04-b978-583f101c1449"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.228692 4982 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13bda864-b495-4a04-b978-583f101c1449-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.228745 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc8qx\" (UniqueName: \"kubernetes.io/projected/13bda864-b495-4a04-b978-583f101c1449-kube-api-access-rc8qx\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.228757 4982 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13bda864-b495-4a04-b978-583f101c1449-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.770407 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tvlgg" event={"ID":"13bda864-b495-4a04-b978-583f101c1449","Type":"ContainerDied","Data":"267f49a5e241c9abf6095c8e4acdbfc9c27771a14f4121c1a289bc7156461198"} Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.770448 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267f49a5e241c9abf6095c8e4acdbfc9c27771a14f4121c1a289bc7156461198" Jan 23 09:17:22 crc kubenswrapper[4982]: I0123 09:17:22.770502 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tvlgg" Jan 23 09:17:42 crc kubenswrapper[4982]: I0123 09:17:42.902696 4982 scope.go:117] "RemoveContainer" containerID="5805d626c5595e0cd459655977bc2a4b6c2fbc057a2a92f4254ad0cbe43c1aa2" Jan 23 09:18:41 crc kubenswrapper[4982]: I0123 09:18:41.435682 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:18:41 crc kubenswrapper[4982]: I0123 09:18:41.436273 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:19:11 crc kubenswrapper[4982]: I0123 09:19:11.436239 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:19:11 crc kubenswrapper[4982]: I0123 09:19:11.436829 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.430666 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-7v6jt"] Jan 23 09:19:22 crc kubenswrapper[4982]: E0123 09:19:22.432719 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="13bda864-b495-4a04-b978-583f101c1449" containerName="storage" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.432833 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bda864-b495-4a04-b978-583f101c1449" containerName="storage" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.433124 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bda864-b495-4a04-b978-583f101c1449" containerName="storage" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.434224 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.439468 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.439724 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.439484 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-87b86" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.439904 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.440140 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.447455 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-ltbxf"] Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.455837 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.469025 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-7v6jt"] Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.482286 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-ltbxf"] Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.579228 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-config\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.579652 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sg76\" (UniqueName: \"kubernetes.io/projected/ad5c25eb-1576-454e-925d-cdbd0cb36566-kube-api-access-4sg76\") pod \"dnsmasq-dns-5986db9b4f-ltbxf\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.579724 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qph\" (UniqueName: \"kubernetes.io/projected/2e972788-2448-4898-841b-ac9eb33ef5b4-kube-api-access-l8qph\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.579749 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.579922 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5c25eb-1576-454e-925d-cdbd0cb36566-config\") pod \"dnsmasq-dns-5986db9b4f-ltbxf\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.680996 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sg76\" (UniqueName: \"kubernetes.io/projected/ad5c25eb-1576-454e-925d-cdbd0cb36566-kube-api-access-4sg76\") pod \"dnsmasq-dns-5986db9b4f-ltbxf\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.681042 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qph\" (UniqueName: \"kubernetes.io/projected/2e972788-2448-4898-841b-ac9eb33ef5b4-kube-api-access-l8qph\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.681063 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.681131 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5c25eb-1576-454e-925d-cdbd0cb36566-config\") pod \"dnsmasq-dns-5986db9b4f-ltbxf\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.681164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-config\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.682088 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-config\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.682126 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-dns-svc\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.682299 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5c25eb-1576-454e-925d-cdbd0cb36566-config\") pod \"dnsmasq-dns-5986db9b4f-ltbxf\" (UID: 
\"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.715429 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qph\" (UniqueName: \"kubernetes.io/projected/2e972788-2448-4898-841b-ac9eb33ef5b4-kube-api-access-l8qph\") pod \"dnsmasq-dns-56bbd59dc5-7v6jt\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.716829 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sg76\" (UniqueName: \"kubernetes.io/projected/ad5c25eb-1576-454e-925d-cdbd0cb36566-kube-api-access-4sg76\") pod \"dnsmasq-dns-5986db9b4f-ltbxf\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.768179 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.785063 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.791051 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-7v6jt"] Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.819419 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-j5gqm"] Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.821030 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.871750 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-j5gqm"] Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.986887 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gznv5\" (UniqueName: \"kubernetes.io/projected/00a8e916-fd2d-4105-a472-587f834c2690-kube-api-access-gznv5\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.986997 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-config\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:22 crc kubenswrapper[4982]: I0123 09:19:22.987080 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-dns-svc\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.090075 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-config\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.090147 
4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-dns-svc\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.090189 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gznv5\" (UniqueName: \"kubernetes.io/projected/00a8e916-fd2d-4105-a472-587f834c2690-kube-api-access-gznv5\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.091682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-config\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.092215 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-dns-svc\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.107918 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-ltbxf"] Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.116174 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gznv5\" (UniqueName: \"kubernetes.io/projected/00a8e916-fd2d-4105-a472-587f834c2690-kube-api-access-gznv5\") pod \"dnsmasq-dns-95587bc99-j5gqm\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.130600 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-gjp94"] Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.131954 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.145392 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-gjp94"] Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.233856 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.295258 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6gnv\" (UniqueName: \"kubernetes.io/projected/79436316-1c64-44cb-97a9-e2b000fdd57c-kube-api-access-l6gnv\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.295490 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-config\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.295675 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.397684 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6gnv\" (UniqueName: \"kubernetes.io/projected/79436316-1c64-44cb-97a9-e2b000fdd57c-kube-api-access-l6gnv\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.397788 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-config\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.397847 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.400118 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-config\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.402287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.421099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6gnv\" (UniqueName: \"kubernetes.io/projected/79436316-1c64-44cb-97a9-e2b000fdd57c-kube-api-access-l6gnv\") pod \"dnsmasq-dns-5d79f765b5-gjp94\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" 
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.446027 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-7v6jt"]
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.467361 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.527549 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-ltbxf"]
Jan 23 09:19:23 crc kubenswrapper[4982]: W0123 09:19:23.541591 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad5c25eb_1576_454e_925d_cdbd0cb36566.slice/crio-4d5d20601668edfca6ba1bfe55c3f7cc5261e1098c9717715db89beb5420e856 WatchSource:0}: Error finding container 4d5d20601668edfca6ba1bfe55c3f7cc5261e1098c9717715db89beb5420e856: Status 404 returned error can't find the container with id 4d5d20601668edfca6ba1bfe55c3f7cc5261e1098c9717715db89beb5420e856
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.772729 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-j5gqm"]
Jan 23 09:19:23 crc kubenswrapper[4982]: W0123 09:19:23.778961 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00a8e916_fd2d_4105_a472_587f834c2690.slice/crio-c85c580851a61eaff849b59ddb138c3c5c47ce17463cf303dfaa6b9e658a6de0 WatchSource:0}: Error finding container c85c580851a61eaff849b59ddb138c3c5c47ce17463cf303dfaa6b9e658a6de0: Status 404 returned error can't find the container with id c85c580851a61eaff849b59ddb138c3c5c47ce17463cf303dfaa6b9e658a6de0
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.928463 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-gjp94"]
Jan 23 09:19:23 crc kubenswrapper[4982]: W0123 09:19:23.964501 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79436316_1c64_44cb_97a9_e2b000fdd57c.slice/crio-5ba5f74e9f349b2784934620212660ae55909d837b6e60ee2f3b1a266f1d9593 WatchSource:0}: Error finding container 5ba5f74e9f349b2784934620212660ae55909d837b6e60ee2f3b1a266f1d9593: Status 404 returned error can't find the container with id 5ba5f74e9f349b2784934620212660ae55909d837b6e60ee2f3b1a266f1d9593
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.986120 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.987906 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.991652 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.991677 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.991652 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.992099 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d8csc"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.992237 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.993667 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.994225 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 23 09:19:23 crc kubenswrapper[4982]: I0123 09:19:23.998253 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.063091 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" event={"ID":"00a8e916-fd2d-4105-a472-587f834c2690","Type":"ContainerStarted","Data":"c85c580851a61eaff849b59ddb138c3c5c47ce17463cf303dfaa6b9e658a6de0"}
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.068333 4982 generic.go:334] "Generic (PLEG): container finished" podID="ad5c25eb-1576-454e-925d-cdbd0cb36566" containerID="67bcd70d84f34f1c29bef164913b320dbe15aa2a2924290df46b42f18f5f726a" exitCode=0
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.068403 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" event={"ID":"ad5c25eb-1576-454e-925d-cdbd0cb36566","Type":"ContainerDied","Data":"67bcd70d84f34f1c29bef164913b320dbe15aa2a2924290df46b42f18f5f726a"}
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.068443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" event={"ID":"ad5c25eb-1576-454e-925d-cdbd0cb36566","Type":"ContainerStarted","Data":"4d5d20601668edfca6ba1bfe55c3f7cc5261e1098c9717715db89beb5420e856"}
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.078692 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" event={"ID":"79436316-1c64-44cb-97a9-e2b000fdd57c","Type":"ContainerStarted","Data":"5ba5f74e9f349b2784934620212660ae55909d837b6e60ee2f3b1a266f1d9593"}
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.083196 4982 generic.go:334] "Generic (PLEG): container finished" podID="2e972788-2448-4898-841b-ac9eb33ef5b4" containerID="781c4b5c2639a1f5310ca32816b6b91caee2c1c2b3edfed3bb2686614b929bd5" exitCode=0
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.083242 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" event={"ID":"2e972788-2448-4898-841b-ac9eb33ef5b4","Type":"ContainerDied","Data":"781c4b5c2639a1f5310ca32816b6b91caee2c1c2b3edfed3bb2686614b929bd5"}
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.083305 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" event={"ID":"2e972788-2448-4898-841b-ac9eb33ef5b4","Type":"ContainerStarted","Data":"ec253222ecdd76e268b9e5be979e4216b81f22db6b2b1a4d32b0d2f336e68f7e"}
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108361 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108411 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108430 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ca879e6-d28a-49e6-8969-fb255ed70837-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108469 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ca879e6-d28a-49e6-8969-fb255ed70837-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108497 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108536 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108558 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrr5p\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-kube-api-access-wrr5p\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108640 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.108659 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211357 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ca879e6-d28a-49e6-8969-fb255ed70837-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211420 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211445 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211498 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrr5p\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-kube-api-access-wrr5p\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211563 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211579 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211593 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211636 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.211687 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ca879e6-d28a-49e6-8969-fb255ed70837-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.213859 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-config-data\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.213898 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.214216 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.215170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.216256 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.216706 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.216957 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99c20e4339a090324623a0ffead992da8ff761983eb00b9d427c52c490858a96/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.221651 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ca879e6-d28a-49e6-8969-fb255ed70837-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.222413 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.223016 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ca879e6-d28a-49e6-8969-fb255ed70837-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.232361 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.237228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrr5p\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-kube-api-access-wrr5p\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.302738 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.306354 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.309097 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.320556 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.322842 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.324028 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.324132 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.324455 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.324919 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pnbgj" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.325227 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.368724 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426399 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxm7z\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-kube-api-access-xxm7z\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426520 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426546 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426585 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e040b9e-9f50-402c-a948-933e03508f1f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426639 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426663 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426710 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426732 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e040b9e-9f50-402c-a948-933e03508f1f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.426782 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.433483 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.473526 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528134 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-dns-svc\") pod \"2e972788-2448-4898-841b-ac9eb33ef5b4\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528331 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8qph\" (UniqueName: \"kubernetes.io/projected/2e972788-2448-4898-841b-ac9eb33ef5b4-kube-api-access-l8qph\") pod \"2e972788-2448-4898-841b-ac9eb33ef5b4\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528397 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-config\") pod \"2e972788-2448-4898-841b-ac9eb33ef5b4\" (UID: \"2e972788-2448-4898-841b-ac9eb33ef5b4\") " Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528571 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxm7z\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-kube-api-access-xxm7z\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528599 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528663 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528683 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528702 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e040b9e-9f50-402c-a948-933e03508f1f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528763 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528777 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528826 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e040b9e-9f50-402c-a948-933e03508f1f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.528861 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.529300 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.529505 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.531356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.532281 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.532577 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e972788-2448-4898-841b-ac9eb33ef5b4-kube-api-access-l8qph" (OuterVolumeSpecName: 
"kube-api-access-l8qph") pod "2e972788-2448-4898-841b-ac9eb33ef5b4" (UID: "2e972788-2448-4898-841b-ac9eb33ef5b4"). InnerVolumeSpecName "kube-api-access-l8qph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.533514 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.533572 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.533578 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e040b9e-9f50-402c-a948-933e03508f1f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.533604 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a22909b2bfa6df63fa177c81dae745ab981de181b95c9d337a19f412ba2381ad/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.533974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e040b9e-9f50-402c-a948-933e03508f1f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.534306 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.538428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.549832 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxm7z\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-kube-api-access-xxm7z\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.552161 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e972788-2448-4898-841b-ac9eb33ef5b4" (UID: 
"2e972788-2448-4898-841b-ac9eb33ef5b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.553107 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-config" (OuterVolumeSpecName: "config") pod "2e972788-2448-4898-841b-ac9eb33ef5b4" (UID: "2e972788-2448-4898-841b-ac9eb33ef5b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.575879 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.621510 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.630331 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sg76\" (UniqueName: \"kubernetes.io/projected/ad5c25eb-1576-454e-925d-cdbd0cb36566-kube-api-access-4sg76\") pod \"ad5c25eb-1576-454e-925d-cdbd0cb36566\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.630420 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5c25eb-1576-454e-925d-cdbd0cb36566-config\") pod \"ad5c25eb-1576-454e-925d-cdbd0cb36566\" (UID: \"ad5c25eb-1576-454e-925d-cdbd0cb36566\") " Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.630990 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8qph\" (UniqueName: \"kubernetes.io/projected/2e972788-2448-4898-841b-ac9eb33ef5b4-kube-api-access-l8qph\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.631016 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.631029 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e972788-2448-4898-841b-ac9eb33ef5b4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.634955 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5c25eb-1576-454e-925d-cdbd0cb36566-kube-api-access-4sg76" (OuterVolumeSpecName: "kube-api-access-4sg76") pod "ad5c25eb-1576-454e-925d-cdbd0cb36566" (UID: "ad5c25eb-1576-454e-925d-cdbd0cb36566"). InnerVolumeSpecName "kube-api-access-4sg76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.649535 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5c25eb-1576-454e-925d-cdbd0cb36566-config" (OuterVolumeSpecName: "config") pod "ad5c25eb-1576-454e-925d-cdbd0cb36566" (UID: "ad5c25eb-1576-454e-925d-cdbd0cb36566"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.666156 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.755110 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sg76\" (UniqueName: \"kubernetes.io/projected/ad5c25eb-1576-454e-925d-cdbd0cb36566-kube-api-access-4sg76\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:24 crc kubenswrapper[4982]: I0123 09:19:24.755279 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5c25eb-1576-454e-925d-cdbd0cb36566-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.092696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" event={"ID":"2e972788-2448-4898-841b-ac9eb33ef5b4","Type":"ContainerDied","Data":"ec253222ecdd76e268b9e5be979e4216b81f22db6b2b1a4d32b0d2f336e68f7e"} Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.092751 4982 scope.go:117] "RemoveContainer" containerID="781c4b5c2639a1f5310ca32816b6b91caee2c1c2b3edfed3bb2686614b929bd5" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.092861 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbd59dc5-7v6jt" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.097388 4982 generic.go:334] "Generic (PLEG): container finished" podID="00a8e916-fd2d-4105-a472-587f834c2690" containerID="985ad6f856704dbf41b2da5e69f09f2bf2358bb3886d956ebb3a3acb184cc0ae" exitCode=0 Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.097440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" event={"ID":"00a8e916-fd2d-4105-a472-587f834c2690","Type":"ContainerDied","Data":"985ad6f856704dbf41b2da5e69f09f2bf2358bb3886d956ebb3a3acb184cc0ae"} Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.103326 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.103315 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5986db9b4f-ltbxf" event={"ID":"ad5c25eb-1576-454e-925d-cdbd0cb36566","Type":"ContainerDied","Data":"4d5d20601668edfca6ba1bfe55c3f7cc5261e1098c9717715db89beb5420e856"} Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.113402 4982 generic.go:334] "Generic (PLEG): container finished" podID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerID="568f4aa4d851c305c31ccf1539e85fd1436d5a749d9b038e07327d71d50ad088" exitCode=0 Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.113447 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" event={"ID":"79436316-1c64-44cb-97a9-e2b000fdd57c","Type":"ContainerDied","Data":"568f4aa4d851c305c31ccf1539e85fd1436d5a749d9b038e07327d71d50ad088"} Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.122254 4982 scope.go:117] "RemoveContainer" containerID="67bcd70d84f34f1c29bef164913b320dbe15aa2a2924290df46b42f18f5f726a" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.131019 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.191787 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.302249 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-7v6jt"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.321418 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbd59dc5-7v6jt"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.344538 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-ltbxf"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.356965 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5986db9b4f-ltbxf"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.369800 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 09:19:25 crc kubenswrapper[4982]: E0123 09:19:25.370376 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e972788-2448-4898-841b-ac9eb33ef5b4" containerName="init" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.370477 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e972788-2448-4898-841b-ac9eb33ef5b4" containerName="init" Jan 23 09:19:25 crc kubenswrapper[4982]: E0123 09:19:25.370604 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5c25eb-1576-454e-925d-cdbd0cb36566" containerName="init" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.370899 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5c25eb-1576-454e-925d-cdbd0cb36566" containerName="init" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.371167 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5c25eb-1576-454e-925d-cdbd0cb36566" containerName="init" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.371249 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e972788-2448-4898-841b-ac9eb33ef5b4" containerName="init" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.372199 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.377126 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.377615 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xss8h" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.384528 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.385560 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.388691 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.390116 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.575946 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e221cae-f304-4f00-be53-5a72edbf3bd7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576044 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v8nw\" (UniqueName: \"kubernetes.io/projected/1e221cae-f304-4f00-be53-5a72edbf3bd7-kube-api-access-9v8nw\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576086 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-config-data-default\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576108 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576136 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576158 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e221cae-f304-4f00-be53-5a72edbf3bd7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576186 4982 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-kolla-config\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.576206 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1e221cae-f304-4f00-be53-5a72edbf3bd7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677129 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e221cae-f304-4f00-be53-5a72edbf3bd7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677189 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v8nw\" (UniqueName: \"kubernetes.io/projected/1e221cae-f304-4f00-be53-5a72edbf3bd7-kube-api-access-9v8nw\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-config-data-default\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677239 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677267 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e221cae-f304-4f00-be53-5a72edbf3bd7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677318 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-kolla-config\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.677336 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/1e221cae-f304-4f00-be53-5a72edbf3bd7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.678073 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1e221cae-f304-4f00-be53-5a72edbf3bd7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.679092 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-kolla-config\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.680077 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-config-data-default\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.680521 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e221cae-f304-4f00-be53-5a72edbf3bd7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.680594 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.680638 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8e10282b85e1d19c6e700268239e09e150aa2059db8f1ba4a131ad399fadbac6/globalmount\"" pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.682347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e221cae-f304-4f00-be53-5a72edbf3bd7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.686912 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e221cae-f304-4f00-be53-5a72edbf3bd7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.702333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v8nw\" (UniqueName: \"kubernetes.io/projected/1e221cae-f304-4f00-be53-5a72edbf3bd7-kube-api-access-9v8nw\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.722451 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7c8a7444-0f64-4438-bfad-9fd78b3e3c98\") pod \"openstack-galera-0\" (UID: \"1e221cae-f304-4f00-be53-5a72edbf3bd7\") " pod="openstack/openstack-galera-0" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.845898 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e972788-2448-4898-841b-ac9eb33ef5b4" path="/var/lib/kubelet/pods/2e972788-2448-4898-841b-ac9eb33ef5b4/volumes" Jan 23 09:19:25 crc kubenswrapper[4982]: I0123 09:19:25.847180 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5c25eb-1576-454e-925d-cdbd0cb36566" path="/var/lib/kubelet/pods/ad5c25eb-1576-454e-925d-cdbd0cb36566/volumes" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.012082 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.127524 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" event={"ID":"79436316-1c64-44cb-97a9-e2b000fdd57c","Type":"ContainerStarted","Data":"e1f1d453ec4370ea0044563942c654509bda46ef93dcc7f6fc72a2e08b9a1107"} Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.127943 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.150781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" event={"ID":"00a8e916-fd2d-4105-a472-587f834c2690","Type":"ContainerStarted","Data":"e7594ffe2cb72de5b8f85519373538ed4576d953551018f55e4afedae3cdfdd6"} Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.151047 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" podStartSLOduration=3.151028778 podStartE2EDuration="3.151028778s" podCreationTimestamp="2026-01-23 09:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:26.149136277 +0000 UTC m=+5040.629499673" watchObservedRunningTime="2026-01-23 09:19:26.151028778 +0000 UTC m=+5040.631392164" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.151743 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.153120 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ca879e6-d28a-49e6-8969-fb255ed70837","Type":"ContainerStarted","Data":"6d711b99424d14f5d904dca61e6419e38cd74b36eab3338e576dd19cbcb24e67"} Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.154808 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e040b9e-9f50-402c-a948-933e03508f1f","Type":"ContainerStarted","Data":"d10195beeccc89c73525fd6fae43c49689f6219d35e1614b2256d719950874f0"} Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.176579 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" podStartSLOduration=4.176561394 podStartE2EDuration="4.176561394s" podCreationTimestamp="2026-01-23 09:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:26.173019779 +0000 UTC m=+5040.653383165" watchObservedRunningTime="2026-01-23 09:19:26.176561394 +0000 UTC m=+5040.656924780" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.490818 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 09:19:26 crc kubenswrapper[4982]: W0123 09:19:26.494255 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e221cae_f304_4f00_be53_5a72edbf3bd7.slice/crio-6d23321ef58c4a4404dba8a9b3c4d6ae910653bf35933b6ab764f922a4d7cb43 WatchSource:0}: Error finding container 6d23321ef58c4a4404dba8a9b3c4d6ae910653bf35933b6ab764f922a4d7cb43: Status 404 returned error can't find the container with id 6d23321ef58c4a4404dba8a9b3c4d6ae910653bf35933b6ab764f922a4d7cb43 Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.758906 
4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.762859 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.767595 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.767736 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wgg68" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.767876 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.768042 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.777107 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.923950 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/945dc347-e5e3-4533-ab71-b8b2d051fdeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924002 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4s8\" (UniqueName: \"kubernetes.io/projected/945dc347-e5e3-4533-ab71-b8b2d051fdeb-kube-api-access-xf4s8\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924043 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924219 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/945dc347-e5e3-4533-ab71-b8b2d051fdeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924442 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924501 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924745 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:26 crc kubenswrapper[4982]: I0123 09:19:26.924851 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945dc347-e5e3-4533-ab71-b8b2d051fdeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.026855 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027444 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027652 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027694 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945dc347-e5e3-4533-ab71-b8b2d051fdeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027719 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/945dc347-e5e3-4533-ab71-b8b2d051fdeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027733 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027742 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4s8\" (UniqueName: \"kubernetes.io/projected/945dc347-e5e3-4533-ab71-b8b2d051fdeb-kube-api-access-xf4s8\") pod \"openstack-cell1-galera-0\" (UID: 
\"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027793 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.027821 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/945dc347-e5e3-4533-ab71-b8b2d051fdeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.028092 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/945dc347-e5e3-4533-ab71-b8b2d051fdeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.028633 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.029606 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/945dc347-e5e3-4533-ab71-b8b2d051fdeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.030461 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.030490 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e3268d993abc0f07a6328ec91e84c2909b84c0f746187bf12c636055d6d32f7/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.035260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/945dc347-e5e3-4533-ab71-b8b2d051fdeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.035303 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/945dc347-e5e3-4533-ab71-b8b2d051fdeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.046789 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4s8\" (UniqueName: \"kubernetes.io/projected/945dc347-e5e3-4533-ab71-b8b2d051fdeb-kube-api-access-xf4s8\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.073386 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b00cac1b-ad94-4eae-a716-93dd760a23eb\") pod \"openstack-cell1-galera-0\" (UID: \"945dc347-e5e3-4533-ab71-b8b2d051fdeb\") " pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.098241 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.178440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ca879e6-d28a-49e6-8969-fb255ed70837","Type":"ContainerStarted","Data":"14dbb11df292eba166d7ce414e7680e147f1864b57d0dfb9ad89afc44f75589c"} Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.185869 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e040b9e-9f50-402c-a948-933e03508f1f","Type":"ContainerStarted","Data":"a7f81449bc3ef19c5850c901e11a0e3c6d9fbed12fb3d1b87e2e9e83c5774b83"} Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.188414 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1e221cae-f304-4f00-be53-5a72edbf3bd7","Type":"ContainerStarted","Data":"6d2f36a03aa5b4bf2e7f2f65e1d06dcfa5c7fede6efc55a587dcadfebd4c12c7"} Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.188550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1e221cae-f304-4f00-be53-5a72edbf3bd7","Type":"ContainerStarted","Data":"6d23321ef58c4a4404dba8a9b3c4d6ae910653bf35933b6ab764f922a4d7cb43"} Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.249707 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.251270 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.253338 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.253549 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.254402 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p8nbz" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.261296 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.438605 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1a7f5c-0b54-4fd8-b231-134061dc72be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.438816 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b1a7f5c-0b54-4fd8-b231-134061dc72be-config-data\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.438868 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1a7f5c-0b54-4fd8-b231-134061dc72be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.438951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b1a7f5c-0b54-4fd8-b231-134061dc72be-kolla-config\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.439125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9j8\" (UniqueName: \"kubernetes.io/projected/2b1a7f5c-0b54-4fd8-b231-134061dc72be-kube-api-access-wj9j8\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.807376 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b1a7f5c-0b54-4fd8-b231-134061dc72be-config-data\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.807448 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1a7f5c-0b54-4fd8-b231-134061dc72be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.807510 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b1a7f5c-0b54-4fd8-b231-134061dc72be-kolla-config\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.807575 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9j8\" (UniqueName: \"kubernetes.io/projected/2b1a7f5c-0b54-4fd8-b231-134061dc72be-kube-api-access-wj9j8\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.807650 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1a7f5c-0b54-4fd8-b231-134061dc72be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.808313 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b1a7f5c-0b54-4fd8-b231-134061dc72be-config-data\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.809021 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2b1a7f5c-0b54-4fd8-b231-134061dc72be-kolla-config\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.811228 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.812645 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b1a7f5c-0b54-4fd8-b231-134061dc72be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") 
" pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.822191 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2b1a7f5c-0b54-4fd8-b231-134061dc72be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:27 crc kubenswrapper[4982]: I0123 09:19:27.836210 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9j8\" (UniqueName: \"kubernetes.io/projected/2b1a7f5c-0b54-4fd8-b231-134061dc72be-kube-api-access-wj9j8\") pod \"memcached-0\" (UID: \"2b1a7f5c-0b54-4fd8-b231-134061dc72be\") " pod="openstack/memcached-0" Jan 23 09:19:28 crc kubenswrapper[4982]: I0123 09:19:28.052238 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-p8nbz" Jan 23 09:19:28 crc kubenswrapper[4982]: I0123 09:19:28.059757 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 09:19:28 crc kubenswrapper[4982]: I0123 09:19:28.089166 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 09:19:28 crc kubenswrapper[4982]: I0123 09:19:28.202699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"945dc347-e5e3-4533-ab71-b8b2d051fdeb","Type":"ContainerStarted","Data":"6dbc262067bcfd5288041c89f96b3727cf2671b896275340223ef2c7d65371f3"} Jan 23 09:19:28 crc kubenswrapper[4982]: I0123 09:19:28.627916 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 09:19:28 crc kubenswrapper[4982]: W0123 09:19:28.634847 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b1a7f5c_0b54_4fd8_b231_134061dc72be.slice/crio-ca064872c390f667d65202f4cdf14198153fa5a66ed24c198d0202487f3c42b2 WatchSource:0}: Error finding container ca064872c390f667d65202f4cdf14198153fa5a66ed24c198d0202487f3c42b2: Status 404 returned error can't find the container with id ca064872c390f667d65202f4cdf14198153fa5a66ed24c198d0202487f3c42b2 Jan 23 09:19:29 crc kubenswrapper[4982]: I0123 09:19:29.214031 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2b1a7f5c-0b54-4fd8-b231-134061dc72be","Type":"ContainerStarted","Data":"347d21957907e8b35f56229af3d2fbe5c24f1deca1488baea8fea5742ac78ac9"} Jan 23 09:19:29 crc kubenswrapper[4982]: I0123 09:19:29.215361 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 09:19:29 crc kubenswrapper[4982]: I0123 09:19:29.215803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2b1a7f5c-0b54-4fd8-b231-134061dc72be","Type":"ContainerStarted","Data":"ca064872c390f667d65202f4cdf14198153fa5a66ed24c198d0202487f3c42b2"} Jan 23 09:19:29 crc kubenswrapper[4982]: I0123 09:19:29.216963 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"945dc347-e5e3-4533-ab71-b8b2d051fdeb","Type":"ContainerStarted","Data":"d08ffe103b9c10dd917b60fcf2819e93b736fc727f8b874dcc98808d72c2f264"} Jan 23 09:19:29 crc kubenswrapper[4982]: I0123 09:19:29.235602 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.2355770919999998 
podStartE2EDuration="2.235577092s" podCreationTimestamp="2026-01-23 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:29.234442472 +0000 UTC m=+5043.714805878" watchObservedRunningTime="2026-01-23 09:19:29.235577092 +0000 UTC m=+5043.715940508" Jan 23 09:19:31 crc kubenswrapper[4982]: I0123 09:19:31.238490 4982 generic.go:334] "Generic (PLEG): container finished" podID="1e221cae-f304-4f00-be53-5a72edbf3bd7" containerID="6d2f36a03aa5b4bf2e7f2f65e1d06dcfa5c7fede6efc55a587dcadfebd4c12c7" exitCode=0 Jan 23 09:19:31 crc kubenswrapper[4982]: I0123 09:19:31.238581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1e221cae-f304-4f00-be53-5a72edbf3bd7","Type":"ContainerDied","Data":"6d2f36a03aa5b4bf2e7f2f65e1d06dcfa5c7fede6efc55a587dcadfebd4c12c7"} Jan 23 09:19:32 crc kubenswrapper[4982]: I0123 09:19:32.252278 4982 generic.go:334] "Generic (PLEG): container finished" podID="945dc347-e5e3-4533-ab71-b8b2d051fdeb" containerID="d08ffe103b9c10dd917b60fcf2819e93b736fc727f8b874dcc98808d72c2f264" exitCode=0 Jan 23 09:19:32 crc kubenswrapper[4982]: I0123 09:19:32.252365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"945dc347-e5e3-4533-ab71-b8b2d051fdeb","Type":"ContainerDied","Data":"d08ffe103b9c10dd917b60fcf2819e93b736fc727f8b874dcc98808d72c2f264"} Jan 23 09:19:32 crc kubenswrapper[4982]: I0123 09:19:32.255818 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1e221cae-f304-4f00-be53-5a72edbf3bd7","Type":"ContainerStarted","Data":"be45f80de341ba1ef15f94763349785c0420e5e995e372035ab4085e009c49a4"} Jan 23 09:19:32 crc kubenswrapper[4982]: I0123 09:19:32.311317 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.311291211 podStartE2EDuration="8.311291211s" podCreationTimestamp="2026-01-23 09:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:32.304243322 +0000 UTC m=+5046.784606718" watchObservedRunningTime="2026-01-23 09:19:32.311291211 +0000 UTC m=+5046.791654607" Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.061689 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.235862 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.292205 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"945dc347-e5e3-4533-ab71-b8b2d051fdeb","Type":"ContainerStarted","Data":"19f4c187ab42686fe56388dbe8e8fefec2386f60e09368e59b91fc9eb3784053"} Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.316334 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.316312311 podStartE2EDuration="8.316312311s" podCreationTimestamp="2026-01-23 09:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:33.31032132 +0000 UTC m=+5047.790684726" watchObservedRunningTime="2026-01-23 09:19:33.316312311 +0000 UTC 
m=+5047.796675707" Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.470397 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.518567 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-j5gqm"] Jan 23 09:19:33 crc kubenswrapper[4982]: I0123 09:19:33.518829 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" podUID="00a8e916-fd2d-4105-a472-587f834c2690" containerName="dnsmasq-dns" containerID="cri-o://e7594ffe2cb72de5b8f85519373538ed4576d953551018f55e4afedae3cdfdd6" gracePeriod=10 Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.301112 4982 generic.go:334] "Generic (PLEG): container finished" podID="00a8e916-fd2d-4105-a472-587f834c2690" containerID="e7594ffe2cb72de5b8f85519373538ed4576d953551018f55e4afedae3cdfdd6" exitCode=0 Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.301199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" event={"ID":"00a8e916-fd2d-4105-a472-587f834c2690","Type":"ContainerDied","Data":"e7594ffe2cb72de5b8f85519373538ed4576d953551018f55e4afedae3cdfdd6"} Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.577545 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.752862 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gznv5\" (UniqueName: \"kubernetes.io/projected/00a8e916-fd2d-4105-a472-587f834c2690-kube-api-access-gznv5\") pod \"00a8e916-fd2d-4105-a472-587f834c2690\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.753255 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-dns-svc\") pod \"00a8e916-fd2d-4105-a472-587f834c2690\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.753397 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-config\") pod \"00a8e916-fd2d-4105-a472-587f834c2690\" (UID: \"00a8e916-fd2d-4105-a472-587f834c2690\") " Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.764700 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a8e916-fd2d-4105-a472-587f834c2690-kube-api-access-gznv5" (OuterVolumeSpecName: "kube-api-access-gznv5") pod "00a8e916-fd2d-4105-a472-587f834c2690" (UID: "00a8e916-fd2d-4105-a472-587f834c2690"). InnerVolumeSpecName "kube-api-access-gznv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.796043 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00a8e916-fd2d-4105-a472-587f834c2690" (UID: "00a8e916-fd2d-4105-a472-587f834c2690"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.797568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-config" (OuterVolumeSpecName: "config") pod "00a8e916-fd2d-4105-a472-587f834c2690" (UID: "00a8e916-fd2d-4105-a472-587f834c2690"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.856126 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.856185 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gznv5\" (UniqueName: \"kubernetes.io/projected/00a8e916-fd2d-4105-a472-587f834c2690-kube-api-access-gznv5\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:34 crc kubenswrapper[4982]: I0123 09:19:34.856203 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00a8e916-fd2d-4105-a472-587f834c2690-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.313596 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" event={"ID":"00a8e916-fd2d-4105-a472-587f834c2690","Type":"ContainerDied","Data":"c85c580851a61eaff849b59ddb138c3c5c47ce17463cf303dfaa6b9e658a6de0"} Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.313679 4982 scope.go:117] "RemoveContainer" containerID="e7594ffe2cb72de5b8f85519373538ed4576d953551018f55e4afedae3cdfdd6" Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.313694 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-j5gqm" Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.343485 4982 scope.go:117] "RemoveContainer" containerID="985ad6f856704dbf41b2da5e69f09f2bf2358bb3886d956ebb3a3acb184cc0ae" Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.353370 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-j5gqm"] Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.368758 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-j5gqm"] Jan 23 09:19:35 crc kubenswrapper[4982]: I0123 09:19:35.837184 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00a8e916-fd2d-4105-a472-587f834c2690" path="/var/lib/kubelet/pods/00a8e916-fd2d-4105-a472-587f834c2690/volumes" Jan 23 09:19:36 crc kubenswrapper[4982]: I0123 09:19:36.013226 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 09:19:36 crc kubenswrapper[4982]: I0123 09:19:36.016888 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 09:19:36 crc kubenswrapper[4982]: I0123 09:19:36.144065 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 09:19:36 crc kubenswrapper[4982]: I0123 09:19:36.393699 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 09:19:37 crc kubenswrapper[4982]: I0123 09:19:37.099174 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:37 crc kubenswrapper[4982]: I0123 09:19:37.099464 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:37 crc kubenswrapper[4982]: I0123 09:19:37.166738 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:37 crc kubenswrapper[4982]: I0123 09:19:37.398034 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 09:19:41 crc kubenswrapper[4982]: I0123 09:19:41.436199 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:19:41 crc kubenswrapper[4982]: I0123 09:19:41.436836 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:19:41 crc kubenswrapper[4982]: I0123 09:19:41.436883 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:19:41 crc kubenswrapper[4982]: I0123 09:19:41.437532 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:19:41 crc kubenswrapper[4982]: I0123 09:19:41.437577 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" gracePeriod=600 Jan 23 09:19:41 crc kubenswrapper[4982]: E0123 09:19:41.560338 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:19:42 crc kubenswrapper[4982]: I0123 09:19:42.373405 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" exitCode=0 Jan 23 09:19:42 crc kubenswrapper[4982]: I0123 09:19:42.373456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"} Jan 23 09:19:42 crc kubenswrapper[4982]: I0123 09:19:42.373495 4982 scope.go:117] "RemoveContainer" containerID="3382718d5d51a4f56a70eff82f0c152683ad214583ff18c7e8933d4528a378de" Jan 23 09:19:42 crc kubenswrapper[4982]: I0123 09:19:42.373962 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:19:42 crc kubenswrapper[4982]: E0123 09:19:42.374373 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.321483 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mhzd9"] Jan 23 09:19:44 crc kubenswrapper[4982]: E0123 09:19:44.322218 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a8e916-fd2d-4105-a472-587f834c2690" containerName="dnsmasq-dns" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.322235 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a8e916-fd2d-4105-a472-587f834c2690" containerName="dnsmasq-dns" Jan 23 09:19:44 crc kubenswrapper[4982]: E0123 09:19:44.322265 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a8e916-fd2d-4105-a472-587f834c2690" containerName="init" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.322273 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a8e916-fd2d-4105-a472-587f834c2690" containerName="init" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.322485 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a8e916-fd2d-4105-a472-587f834c2690" containerName="dnsmasq-dns" Jan 23 09:19:44 crc 
kubenswrapper[4982]: I0123 09:19:44.323237 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.325276 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.332291 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mhzd9"] Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.422380 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdde6554-29b1-4acb-b799-b935bf021f9e-operator-scripts\") pod \"root-account-create-update-mhzd9\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.422689 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l98js\" (UniqueName: \"kubernetes.io/projected/bdde6554-29b1-4acb-b799-b935bf021f9e-kube-api-access-l98js\") pod \"root-account-create-update-mhzd9\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.524831 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdde6554-29b1-4acb-b799-b935bf021f9e-operator-scripts\") pod \"root-account-create-update-mhzd9\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.524904 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l98js\" (UniqueName: \"kubernetes.io/projected/bdde6554-29b1-4acb-b799-b935bf021f9e-kube-api-access-l98js\") pod \"root-account-create-update-mhzd9\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.526363 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdde6554-29b1-4acb-b799-b935bf021f9e-operator-scripts\") pod \"root-account-create-update-mhzd9\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.545144 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l98js\" (UniqueName: \"kubernetes.io/projected/bdde6554-29b1-4acb-b799-b935bf021f9e-kube-api-access-l98js\") pod \"root-account-create-update-mhzd9\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:44 crc kubenswrapper[4982]: I0123 09:19:44.640984 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:45 crc kubenswrapper[4982]: I0123 09:19:45.105543 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mhzd9"] Jan 23 09:19:45 crc kubenswrapper[4982]: I0123 09:19:45.406337 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mhzd9" event={"ID":"bdde6554-29b1-4acb-b799-b935bf021f9e","Type":"ContainerStarted","Data":"1ab24f08e7c7fda23744b4887c6073ecc6cac7d472943471fcf10074fb2d0d32"} Jan 23 09:19:45 crc kubenswrapper[4982]: I0123 09:19:45.407584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mhzd9" event={"ID":"bdde6554-29b1-4acb-b799-b935bf021f9e","Type":"ContainerStarted","Data":"06ff62d08ad76dfffc3f0bd5c6ded80e6b0b4e2eaad51fb1d2bfc5521ebc822b"} Jan 23 09:19:45 crc kubenswrapper[4982]: I0123 09:19:45.427046 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mhzd9" podStartSLOduration=1.427029546 podStartE2EDuration="1.427029546s" podCreationTimestamp="2026-01-23 09:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:45.423917473 +0000 UTC m=+5059.904280869" watchObservedRunningTime="2026-01-23 09:19:45.427029546 +0000 UTC m=+5059.907392932" Jan 23 09:19:46 crc kubenswrapper[4982]: I0123 09:19:46.415853 4982 generic.go:334] "Generic (PLEG): container finished" podID="bdde6554-29b1-4acb-b799-b935bf021f9e" containerID="1ab24f08e7c7fda23744b4887c6073ecc6cac7d472943471fcf10074fb2d0d32" exitCode=0 Jan 23 09:19:46 crc kubenswrapper[4982]: I0123 09:19:46.415931 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mhzd9" event={"ID":"bdde6554-29b1-4acb-b799-b935bf021f9e","Type":"ContainerDied","Data":"1ab24f08e7c7fda23744b4887c6073ecc6cac7d472943471fcf10074fb2d0d32"} Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.718722 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.780599 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l98js\" (UniqueName: \"kubernetes.io/projected/bdde6554-29b1-4acb-b799-b935bf021f9e-kube-api-access-l98js\") pod \"bdde6554-29b1-4acb-b799-b935bf021f9e\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.781658 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdde6554-29b1-4acb-b799-b935bf021f9e-operator-scripts\") pod \"bdde6554-29b1-4acb-b799-b935bf021f9e\" (UID: \"bdde6554-29b1-4acb-b799-b935bf021f9e\") " Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.782530 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdde6554-29b1-4acb-b799-b935bf021f9e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdde6554-29b1-4acb-b799-b935bf021f9e" (UID: "bdde6554-29b1-4acb-b799-b935bf021f9e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.786060 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdde6554-29b1-4acb-b799-b935bf021f9e-kube-api-access-l98js" (OuterVolumeSpecName: "kube-api-access-l98js") pod "bdde6554-29b1-4acb-b799-b935bf021f9e" (UID: "bdde6554-29b1-4acb-b799-b935bf021f9e"). InnerVolumeSpecName "kube-api-access-l98js". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.883861 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l98js\" (UniqueName: \"kubernetes.io/projected/bdde6554-29b1-4acb-b799-b935bf021f9e-kube-api-access-l98js\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:47.884280 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdde6554-29b1-4acb-b799-b935bf021f9e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:48.435150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mhzd9" event={"ID":"bdde6554-29b1-4acb-b799-b935bf021f9e","Type":"ContainerDied","Data":"06ff62d08ad76dfffc3f0bd5c6ded80e6b0b4e2eaad51fb1d2bfc5521ebc822b"} Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:48.435189 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mhzd9" Jan 23 09:19:48 crc kubenswrapper[4982]: I0123 09:19:48.435198 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ff62d08ad76dfffc3f0bd5c6ded80e6b0b4e2eaad51fb1d2bfc5521ebc822b" Jan 23 09:19:50 crc kubenswrapper[4982]: I0123 09:19:50.749420 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mhzd9"] Jan 23 09:19:50 crc kubenswrapper[4982]: I0123 09:19:50.757035 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mhzd9"] Jan 23 09:19:51 crc kubenswrapper[4982]: I0123 09:19:51.839227 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdde6554-29b1-4acb-b799-b935bf021f9e" path="/var/lib/kubelet/pods/bdde6554-29b1-4acb-b799-b935bf021f9e/volumes" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.756779 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-v52d8"] Jan 23 09:19:55 crc kubenswrapper[4982]: E0123 09:19:55.757515 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdde6554-29b1-4acb-b799-b935bf021f9e" containerName="mariadb-account-create-update" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.757532 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdde6554-29b1-4acb-b799-b935bf021f9e" containerName="mariadb-account-create-update" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.757771 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdde6554-29b1-4acb-b799-b935bf021f9e" containerName="mariadb-account-create-update" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.758429 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.760392 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.764817 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v52d8"] Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.814885 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9243b23-c93c-4495-9657-84cf3e077475-operator-scripts\") pod \"root-account-create-update-v52d8\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.815003 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rv8\" (UniqueName: \"kubernetes.io/projected/a9243b23-c93c-4495-9657-84cf3e077475-kube-api-access-27rv8\") pod \"root-account-create-update-v52d8\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.916988 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9243b23-c93c-4495-9657-84cf3e077475-operator-scripts\") pod \"root-account-create-update-v52d8\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.917210 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27rv8\" (UniqueName: \"kubernetes.io/projected/a9243b23-c93c-4495-9657-84cf3e077475-kube-api-access-27rv8\") pod \"root-account-create-update-v52d8\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.918736 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9243b23-c93c-4495-9657-84cf3e077475-operator-scripts\") pod \"root-account-create-update-v52d8\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:55 crc kubenswrapper[4982]: I0123 09:19:55.938608 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27rv8\" (UniqueName: \"kubernetes.io/projected/a9243b23-c93c-4495-9657-84cf3e077475-kube-api-access-27rv8\") pod \"root-account-create-update-v52d8\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:56 crc kubenswrapper[4982]: I0123 09:19:56.078637 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:56 crc kubenswrapper[4982]: I0123 09:19:56.535386 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-v52d8"] Jan 23 09:19:56 crc kubenswrapper[4982]: I0123 09:19:56.827016 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:19:56 crc kubenswrapper[4982]: E0123 09:19:56.827417 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:19:57 crc kubenswrapper[4982]: I0123 09:19:57.504688 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9243b23-c93c-4495-9657-84cf3e077475" containerID="8df22eb4ec68da08674d281fe27918ae98b130fd34db99af66509a520fba457d" exitCode=0 Jan 23 09:19:57 crc kubenswrapper[4982]: I0123 09:19:57.504827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v52d8" event={"ID":"a9243b23-c93c-4495-9657-84cf3e077475","Type":"ContainerDied","Data":"8df22eb4ec68da08674d281fe27918ae98b130fd34db99af66509a520fba457d"} Jan 23 09:19:57 crc kubenswrapper[4982]: I0123 09:19:57.504981 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v52d8" event={"ID":"a9243b23-c93c-4495-9657-84cf3e077475","Type":"ContainerStarted","Data":"157251d39f7539060ab588c2e24bc4f1809a09bda1677f66b62ceb60826ecf75"} Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.843441 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.870323 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9243b23-c93c-4495-9657-84cf3e077475-operator-scripts\") pod \"a9243b23-c93c-4495-9657-84cf3e077475\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.870485 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27rv8\" (UniqueName: \"kubernetes.io/projected/a9243b23-c93c-4495-9657-84cf3e077475-kube-api-access-27rv8\") pod \"a9243b23-c93c-4495-9657-84cf3e077475\" (UID: \"a9243b23-c93c-4495-9657-84cf3e077475\") " Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.871315 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9243b23-c93c-4495-9657-84cf3e077475-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9243b23-c93c-4495-9657-84cf3e077475" (UID: "a9243b23-c93c-4495-9657-84cf3e077475"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.877035 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9243b23-c93c-4495-9657-84cf3e077475-kube-api-access-27rv8" (OuterVolumeSpecName: "kube-api-access-27rv8") pod "a9243b23-c93c-4495-9657-84cf3e077475" (UID: "a9243b23-c93c-4495-9657-84cf3e077475"). 
InnerVolumeSpecName "kube-api-access-27rv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.972606 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27rv8\" (UniqueName: \"kubernetes.io/projected/a9243b23-c93c-4495-9657-84cf3e077475-kube-api-access-27rv8\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:58 crc kubenswrapper[4982]: I0123 09:19:58.972661 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9243b23-c93c-4495-9657-84cf3e077475-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.522930 4982 generic.go:334] "Generic (PLEG): container finished" podID="4e040b9e-9f50-402c-a948-933e03508f1f" containerID="a7f81449bc3ef19c5850c901e11a0e3c6d9fbed12fb3d1b87e2e9e83c5774b83" exitCode=0 Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.522999 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e040b9e-9f50-402c-a948-933e03508f1f","Type":"ContainerDied","Data":"a7f81449bc3ef19c5850c901e11a0e3c6d9fbed12fb3d1b87e2e9e83c5774b83"} Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.525751 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-v52d8" Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.525750 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-v52d8" event={"ID":"a9243b23-c93c-4495-9657-84cf3e077475","Type":"ContainerDied","Data":"157251d39f7539060ab588c2e24bc4f1809a09bda1677f66b62ceb60826ecf75"} Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.526997 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="157251d39f7539060ab588c2e24bc4f1809a09bda1677f66b62ceb60826ecf75" Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.527937 4982 generic.go:334] "Generic (PLEG): container finished" podID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerID="14dbb11df292eba166d7ce414e7680e147f1864b57d0dfb9ad89afc44f75589c" exitCode=0 Jan 23 09:19:59 crc kubenswrapper[4982]: I0123 09:19:59.527984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ca879e6-d28a-49e6-8969-fb255ed70837","Type":"ContainerDied","Data":"14dbb11df292eba166d7ce414e7680e147f1864b57d0dfb9ad89afc44f75589c"} Jan 23 09:20:00 crc kubenswrapper[4982]: I0123 09:20:00.537319 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ca879e6-d28a-49e6-8969-fb255ed70837","Type":"ContainerStarted","Data":"f56d592e0fde14ff17e334a7f17a1dfd89cd99dea7c59e16681944597c8f94a3"} Jan 23 09:20:00 crc kubenswrapper[4982]: I0123 09:20:00.537545 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 09:20:00 crc kubenswrapper[4982]: I0123 09:20:00.540739 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e040b9e-9f50-402c-a948-933e03508f1f","Type":"ContainerStarted","Data":"7c3af71254f8884eb889b0638250a0d255552f5ffa88cc688a3813858aa195c0"} Jan 23 09:20:00 crc kubenswrapper[4982]: I0123 09:20:00.540976 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:20:00 crc kubenswrapper[4982]: I0123 09:20:00.561661 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.561619028 podStartE2EDuration="38.561619028s" podCreationTimestamp="2026-01-23 09:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:20:00.557827216 +0000 UTC m=+5075.038190612" watchObservedRunningTime="2026-01-23 09:20:00.561619028 +0000 UTC m=+5075.041982414" Jan 23 09:20:00 crc kubenswrapper[4982]: I0123 09:20:00.577866 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.577851144 podStartE2EDuration="37.577851144s" podCreationTimestamp="2026-01-23 09:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:20:00.577462323 +0000 UTC m=+5075.057825709" watchObservedRunningTime="2026-01-23 09:20:00.577851144 +0000 UTC m=+5075.058214530" Jan 23 09:20:07 crc kubenswrapper[4982]: I0123 09:20:07.827610 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:20:07 crc kubenswrapper[4982]: E0123 09:20:07.828398 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:20:14 crc kubenswrapper[4982]: I0123 09:20:14.627031 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 09:20:14 crc kubenswrapper[4982]: I0123 09:20:14.670035 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 09:20:18 crc kubenswrapper[4982]: I0123 09:20:18.827848 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:20:18 crc kubenswrapper[4982]: E0123 09:20:18.828417 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.620681 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-4284v"] Jan 23 09:20:19 crc kubenswrapper[4982]: E0123 09:20:19.621061 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9243b23-c93c-4495-9657-84cf3e077475" containerName="mariadb-account-create-update" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.621079 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9243b23-c93c-4495-9657-84cf3e077475" containerName="mariadb-account-create-update" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.621283 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9243b23-c93c-4495-9657-84cf3e077475" containerName="mariadb-account-create-update" 
Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.622244 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.682273 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-4284v"] Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.718797 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-dns-svc\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.718969 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57f9\" (UniqueName: \"kubernetes.io/projected/13a750f3-a134-414b-bc76-db62466c591a-kube-api-access-f57f9\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.719035 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-config\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.820883 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f57f9\" (UniqueName: \"kubernetes.io/projected/13a750f3-a134-414b-bc76-db62466c591a-kube-api-access-f57f9\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.820995 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-config\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.821040 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-dns-svc\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.822110 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-config\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.822220 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-dns-svc\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.846527 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f57f9\" (UniqueName: \"kubernetes.io/projected/13a750f3-a134-414b-bc76-db62466c591a-kube-api-access-f57f9\") pod \"dnsmasq-dns-699964fbc-4284v\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:19 crc kubenswrapper[4982]: I0123 09:20:19.986764 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:20 crc kubenswrapper[4982]: I0123 09:20:20.392362 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:20:20 crc kubenswrapper[4982]: I0123 09:20:20.455474 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-4284v"] Jan 23 09:20:20 crc kubenswrapper[4982]: I0123 09:20:20.696253 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-4284v" event={"ID":"13a750f3-a134-414b-bc76-db62466c591a","Type":"ContainerStarted","Data":"968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e"} Jan 23 09:20:20 crc kubenswrapper[4982]: I0123 09:20:20.696540 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-4284v" event={"ID":"13a750f3-a134-414b-bc76-db62466c591a","Type":"ContainerStarted","Data":"cef7b8c93ba54f9784740a6d940550a41b8a869e290c4c2224f901189f4f94d8"} Jan 23 09:20:21 crc kubenswrapper[4982]: I0123 09:20:21.232228 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 09:20:21 crc kubenswrapper[4982]: I0123 09:20:21.706228 4982 generic.go:334] "Generic (PLEG): container finished" podID="13a750f3-a134-414b-bc76-db62466c591a" containerID="968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e" exitCode=0 Jan 23 09:20:21 crc kubenswrapper[4982]: I0123 09:20:21.706274 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-4284v" event={"ID":"13a750f3-a134-414b-bc76-db62466c591a","Type":"ContainerDied","Data":"968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e"} Jan 23 09:20:22 crc kubenswrapper[4982]: I0123 09:20:22.716491 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-4284v" event={"ID":"13a750f3-a134-414b-bc76-db62466c591a","Type":"ContainerStarted","Data":"e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50"} Jan 23 09:20:22 crc kubenswrapper[4982]: I0123 09:20:22.717115 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:22 crc kubenswrapper[4982]: I0123 09:20:22.742578 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-4284v" podStartSLOduration=3.742559811 podStartE2EDuration="3.742559811s" podCreationTimestamp="2026-01-23 09:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:20:22.742389137 +0000 UTC m=+5097.222752523" watchObservedRunningTime="2026-01-23 09:20:22.742559811 +0000 UTC m=+5097.222923197" Jan 23 09:20:24 crc kubenswrapper[4982]: I0123 09:20:24.975257 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerName="rabbitmq" containerID="cri-o://f56d592e0fde14ff17e334a7f17a1dfd89cd99dea7c59e16681944597c8f94a3" gracePeriod=604796 Jan 23 09:20:26 crc 
kubenswrapper[4982]: I0123 09:20:26.161504 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" containerName="rabbitmq" containerID="cri-o://7c3af71254f8884eb889b0638250a0d255552f5ffa88cc688a3813858aa195c0" gracePeriod=604796 Jan 23 09:20:29 crc kubenswrapper[4982]: I0123 09:20:29.987795 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.046307 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-gjp94"] Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.046712 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerName="dnsmasq-dns" containerID="cri-o://e1f1d453ec4370ea0044563942c654509bda46ef93dcc7f6fc72a2e08b9a1107" gracePeriod=10 Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.470418 4982 generic.go:334] "Generic (PLEG): container finished" podID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerID="e1f1d453ec4370ea0044563942c654509bda46ef93dcc7f6fc72a2e08b9a1107" exitCode=0 Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.470641 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" event={"ID":"79436316-1c64-44cb-97a9-e2b000fdd57c","Type":"ContainerDied","Data":"e1f1d453ec4370ea0044563942c654509bda46ef93dcc7f6fc72a2e08b9a1107"} Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.656175 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.728087 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6gnv\" (UniqueName: \"kubernetes.io/projected/79436316-1c64-44cb-97a9-e2b000fdd57c-kube-api-access-l6gnv\") pod \"79436316-1c64-44cb-97a9-e2b000fdd57c\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.728145 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-dns-svc\") pod \"79436316-1c64-44cb-97a9-e2b000fdd57c\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.728287 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-config\") pod \"79436316-1c64-44cb-97a9-e2b000fdd57c\" (UID: \"79436316-1c64-44cb-97a9-e2b000fdd57c\") " Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.733974 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79436316-1c64-44cb-97a9-e2b000fdd57c-kube-api-access-l6gnv" (OuterVolumeSpecName: "kube-api-access-l6gnv") pod "79436316-1c64-44cb-97a9-e2b000fdd57c" (UID: "79436316-1c64-44cb-97a9-e2b000fdd57c"). InnerVolumeSpecName "kube-api-access-l6gnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.773274 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79436316-1c64-44cb-97a9-e2b000fdd57c" (UID: "79436316-1c64-44cb-97a9-e2b000fdd57c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.773290 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-config" (OuterVolumeSpecName: "config") pod "79436316-1c64-44cb-97a9-e2b000fdd57c" (UID: "79436316-1c64-44cb-97a9-e2b000fdd57c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.831374 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6gnv\" (UniqueName: \"kubernetes.io/projected/79436316-1c64-44cb-97a9-e2b000fdd57c-kube-api-access-l6gnv\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.831405 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:30 crc kubenswrapper[4982]: I0123 09:20:30.831414 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79436316-1c64-44cb-97a9-e2b000fdd57c-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.479128 4982 generic.go:334] "Generic (PLEG): container finished" podID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerID="f56d592e0fde14ff17e334a7f17a1dfd89cd99dea7c59e16681944597c8f94a3" exitCode=0 Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.479223 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ca879e6-d28a-49e6-8969-fb255ed70837","Type":"ContainerDied","Data":"f56d592e0fde14ff17e334a7f17a1dfd89cd99dea7c59e16681944597c8f94a3"} Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.479273 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7ca879e6-d28a-49e6-8969-fb255ed70837","Type":"ContainerDied","Data":"6d711b99424d14f5d904dca61e6419e38cd74b36eab3338e576dd19cbcb24e67"} Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.479288 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d711b99424d14f5d904dca61e6419e38cd74b36eab3338e576dd19cbcb24e67" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.481214 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" event={"ID":"79436316-1c64-44cb-97a9-e2b000fdd57c","Type":"ContainerDied","Data":"5ba5f74e9f349b2784934620212660ae55909d837b6e60ee2f3b1a266f1d9593"} Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.481243 4982 scope.go:117] "RemoveContainer" containerID="e1f1d453ec4370ea0044563942c654509bda46ef93dcc7f6fc72a2e08b9a1107" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.481366 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-gjp94" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.528541 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.536386 4982 scope.go:117] "RemoveContainer" containerID="568f4aa4d851c305c31ccf1539e85fd1436d5a749d9b038e07327d71d50ad088" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.540160 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-gjp94"] Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.550153 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-gjp94"] Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.649777 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-config-data\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.649847 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-erlang-cookie\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.649902 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ca879e6-d28a-49e6-8969-fb255ed70837-erlang-cookie-secret\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650065 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650096 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrr5p\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-kube-api-access-wrr5p\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650166 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-plugins\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650202 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-confd\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650238 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-server-conf\") pod 
\"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650284 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-plugins-conf\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650309 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-tls\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.650333 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ca879e6-d28a-49e6-8969-fb255ed70837-pod-info\") pod \"7ca879e6-d28a-49e6-8969-fb255ed70837\" (UID: \"7ca879e6-d28a-49e6-8969-fb255ed70837\") " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.652401 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.653478 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.653964 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.656498 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7ca879e6-d28a-49e6-8969-fb255ed70837-pod-info" (OuterVolumeSpecName: "pod-info") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.657822 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-kube-api-access-wrr5p" (OuterVolumeSpecName: "kube-api-access-wrr5p") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "kube-api-access-wrr5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.658888 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.662502 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ca879e6-d28a-49e6-8969-fb255ed70837-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.673614 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da" (OuterVolumeSpecName: "persistence") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "pvc-f194ee90-157b-48e3-9baf-792ac5f445da". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.684777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-config-data" (OuterVolumeSpecName: "config-data") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.706003 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-server-conf" (OuterVolumeSpecName: "server-conf") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752226 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752260 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752271 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ca879e6-d28a-49e6-8969-fb255ed70837-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752302 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") on node \"crc\" " Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752313 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrr5p\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-kube-api-access-wrr5p\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752322 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752330 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752339 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ca879e6-d28a-49e6-8969-fb255ed70837-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752346 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.752355 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ca879e6-d28a-49e6-8969-fb255ed70837-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.766891 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7ca879e6-d28a-49e6-8969-fb255ed70837" (UID: "7ca879e6-d28a-49e6-8969-fb255ed70837"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.777202 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.777387 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f194ee90-157b-48e3-9baf-792ac5f445da" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da") on node "crc" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.841386 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" path="/var/lib/kubelet/pods/79436316-1c64-44cb-97a9-e2b000fdd57c/volumes" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.853849 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:31 crc kubenswrapper[4982]: I0123 09:20:31.854350 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ca879e6-d28a-49e6-8969-fb255ed70837-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.492672 4982 generic.go:334] "Generic (PLEG): container finished" podID="4e040b9e-9f50-402c-a948-933e03508f1f" containerID="7c3af71254f8884eb889b0638250a0d255552f5ffa88cc688a3813858aa195c0" exitCode=0 Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.492775 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e040b9e-9f50-402c-a948-933e03508f1f","Type":"ContainerDied","Data":"7c3af71254f8884eb889b0638250a0d255552f5ffa88cc688a3813858aa195c0"} Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.494113 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.523280 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.536252 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.559565 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:20:32 crc kubenswrapper[4982]: E0123 09:20:32.559979 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerName="dnsmasq-dns" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.559996 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerName="dnsmasq-dns" Jan 23 09:20:32 crc kubenswrapper[4982]: E0123 09:20:32.560021 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerName="rabbitmq" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.560027 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerName="rabbitmq" Jan 23 09:20:32 crc kubenswrapper[4982]: E0123 09:20:32.560049 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerName="init" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.560055 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerName="init" Jan 23 09:20:32 crc kubenswrapper[4982]: E0123 09:20:32.560064 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerName="setup-container" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.560072 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerName="setup-container" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.560232 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="79436316-1c64-44cb-97a9-e2b000fdd57c" containerName="dnsmasq-dns" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.560256 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" containerName="rabbitmq" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.561177 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.566916 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.567149 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d8csc" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.567260 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.567377 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.568117 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.568667 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.576236 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.602553 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669020 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4da68286-4aaa-4c7e-8738-e41bdc402dc1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669082 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669113 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-config-data\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669248 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqg8\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-kube-api-access-whqg8\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669331 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669359 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4da68286-4aaa-4c7e-8738-e41bdc402dc1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669392 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669431 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.669454 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0" Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.753986 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.770935 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.770999 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771027 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771092 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4da68286-4aaa-4c7e-8738-e41bdc402dc1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771117 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771140 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771203 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-config-data\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqg8\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-kube-api-access-whqg8\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771290 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771319 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.771347 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4da68286-4aaa-4c7e-8738-e41bdc402dc1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.772055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.772294 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.773039 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.774713 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.775356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4da68286-4aaa-4c7e-8738-e41bdc402dc1-config-data\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.783553 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.791572 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4da68286-4aaa-4c7e-8738-e41bdc402dc1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.795144 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.800304 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.800413 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4da68286-4aaa-4c7e-8738-e41bdc402dc1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.800437 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/99c20e4339a090324623a0ffead992da8ff761983eb00b9d427c52c490858a96/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.806748 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqg8\" (UniqueName: \"kubernetes.io/projected/4da68286-4aaa-4c7e-8738-e41bdc402dc1-kube-api-access-whqg8\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872156 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-confd\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872313 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872335 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-config-data\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872477 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e040b9e-9f50-402c-a948-933e03508f1f-pod-info\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872519 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-plugins-conf\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872563 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-plugins\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872619 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-server-conf\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872687 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-erlang-cookie\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872730 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e040b9e-9f50-402c-a948-933e03508f1f-erlang-cookie-secret\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872767 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxm7z\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-kube-api-access-xxm7z\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.872781 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-tls\") pod \"4e040b9e-9f50-402c-a948-933e03508f1f\" (UID: \"4e040b9e-9f50-402c-a948-933e03508f1f\") "
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.874370 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.874606 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.875729 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f194ee90-157b-48e3-9baf-792ac5f445da\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f194ee90-157b-48e3-9baf-792ac5f445da\") pod \"rabbitmq-server-0\" (UID: \"4da68286-4aaa-4c7e-8738-e41bdc402dc1\") " pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.878682 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.878742 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-kube-api-access-xxm7z" (OuterVolumeSpecName: "kube-api-access-xxm7z") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "kube-api-access-xxm7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.879179 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4e040b9e-9f50-402c-a948-933e03508f1f-pod-info" (OuterVolumeSpecName: "pod-info") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.880923 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e040b9e-9f50-402c-a948-933e03508f1f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.881048 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.902978 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0" (OuterVolumeSpecName: "persistence") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.903132 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-config-data" (OuterVolumeSpecName: "config-data") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.912824 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.938938 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-server-conf" (OuterVolumeSpecName: "server-conf") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975190 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975229 4982 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e040b9e-9f50-402c-a948-933e03508f1f-pod-info\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975239 4982 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975247 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975257 4982 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e040b9e-9f50-402c-a948-933e03508f1f-server-conf\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975267 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975278 4982 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e040b9e-9f50-402c-a948-933e03508f1f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975289 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxm7z\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-kube-api-access-xxm7z\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975298 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:32 crc kubenswrapper[4982]: I0123 09:20:32.975338 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") on node \"crc\" "
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.011800 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4e040b9e-9f50-402c-a948-933e03508f1f" (UID: "4e040b9e-9f50-402c-a948-933e03508f1f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.032071 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.032235 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0") on node "crc"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.077042 4982 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e040b9e-9f50-402c-a948-933e03508f1f-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.077078 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") on node \"crc\" DevicePath \"\""
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.431089 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.504111 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4e040b9e-9f50-402c-a948-933e03508f1f","Type":"ContainerDied","Data":"d10195beeccc89c73525fd6fae43c49689f6219d35e1614b2256d719950874f0"}
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.504168 4982 scope.go:117] "RemoveContainer" containerID="7c3af71254f8884eb889b0638250a0d255552f5ffa88cc688a3813858aa195c0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.504313 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.515160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4da68286-4aaa-4c7e-8738-e41bdc402dc1","Type":"ContainerStarted","Data":"2027b24091fc519329325ed7b09539c6a5ec1f746875b5420779374a4e442c9e"}
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.550113 4982 scope.go:117] "RemoveContainer" containerID="a7f81449bc3ef19c5850c901e11a0e3c6d9fbed12fb3d1b87e2e9e83c5774b83"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.596648 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.621250 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.631998 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:20:33 crc kubenswrapper[4982]: E0123 09:20:33.632398 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" containerName="rabbitmq"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.632416 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" containerName="rabbitmq"
Jan 23 09:20:33 crc kubenswrapper[4982]: E0123 09:20:33.632438 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" containerName="setup-container"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.632446 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" containerName="setup-container"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.632756 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" containerName="rabbitmq"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.633862 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.636678 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.636845 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.636933 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.637023 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-pnbgj"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.637138 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.637173 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.637286 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.642350 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685698 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685785 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685857 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65dd14b2-00d6-4e7f-a06d-c47397d9236a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685890 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwhv\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-kube-api-access-pfwhv\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685929 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65dd14b2-00d6-4e7f-a06d-c47397d9236a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685954 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.685973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.686007 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.686037 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.686075 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787033 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787106 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65dd14b2-00d6-4e7f-a06d-c47397d9236a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787183 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfwhv\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-kube-api-access-pfwhv\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787204 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65dd14b2-00d6-4e7f-a06d-c47397d9236a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787227 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787272 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787301 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787333 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.787373 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.788093 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.788120 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.788658 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.788677 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.788963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/65dd14b2-00d6-4e7f-a06d-c47397d9236a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.798251 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/65dd14b2-00d6-4e7f-a06d-c47397d9236a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.798486 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.799007 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.800205 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/65dd14b2-00d6-4e7f-a06d-c47397d9236a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.823557 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfwhv\" (UniqueName: \"kubernetes.io/projected/65dd14b2-00d6-4e7f-a06d-c47397d9236a-kube-api-access-pfwhv\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.825336 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.825375 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a22909b2bfa6df63fa177c81dae745ab981de181b95c9d337a19f412ba2381ad/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.831399 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:20:33 crc kubenswrapper[4982]: E0123 09:20:33.831655 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.843486 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e040b9e-9f50-402c-a948-933e03508f1f" path="/var/lib/kubelet/pods/4e040b9e-9f50-402c-a948-933e03508f1f/volumes"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.844783 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca879e6-d28a-49e6-8969-fb255ed70837" path="/var/lib/kubelet/pods/7ca879e6-d28a-49e6-8969-fb255ed70837/volumes"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.873723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d02c39ab-2e5f-41ad-b407-a00b02a6c4e0\") pod \"rabbitmq-cell1-server-0\" (UID: \"65dd14b2-00d6-4e7f-a06d-c47397d9236a\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:33 crc kubenswrapper[4982]: I0123 09:20:33.963242 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:20:34 crc kubenswrapper[4982]: I0123 09:20:34.394175 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 23 09:20:34 crc kubenswrapper[4982]: W0123 09:20:34.459410 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65dd14b2_00d6_4e7f_a06d_c47397d9236a.slice/crio-734e881a1d29739aa55566543b7645d1cd7a521dfca95644bc791d70a5003adf WatchSource:0}: Error finding container 734e881a1d29739aa55566543b7645d1cd7a521dfca95644bc791d70a5003adf: Status 404 returned error can't find the container with id 734e881a1d29739aa55566543b7645d1cd7a521dfca95644bc791d70a5003adf
Jan 23 09:20:34 crc kubenswrapper[4982]: I0123 09:20:34.524584 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65dd14b2-00d6-4e7f-a06d-c47397d9236a","Type":"ContainerStarted","Data":"734e881a1d29739aa55566543b7645d1cd7a521dfca95644bc791d70a5003adf"}
Jan 23 09:20:35 crc kubenswrapper[4982]: I0123 09:20:35.536806 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4da68286-4aaa-4c7e-8738-e41bdc402dc1","Type":"ContainerStarted","Data":"4b6effb6dca2b21a3977e1548c2c1478830c0dee5ef14ebd17859aaa1f983aee"}
Jan 23 09:20:36 crc kubenswrapper[4982]: I0123 09:20:36.546910 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65dd14b2-00d6-4e7f-a06d-c47397d9236a","Type":"ContainerStarted","Data":"dafd5d596b7a588ff84ed5fb2b893e3c7dcbc28180a085d9cc4a4318a11f33e8"}
Jan 23 09:20:47 crc kubenswrapper[4982]: I0123 09:20:47.827735 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:20:47 crc kubenswrapper[4982]: E0123 09:20:47.828518 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:21:01 crc kubenswrapper[4982]: I0123 09:21:01.826952 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:21:01 crc kubenswrapper[4982]: E0123 09:21:01.827707 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:21:07 crc kubenswrapper[4982]: I0123 09:21:07.790882 4982 generic.go:334] "Generic (PLEG): container finished" podID="4da68286-4aaa-4c7e-8738-e41bdc402dc1" containerID="4b6effb6dca2b21a3977e1548c2c1478830c0dee5ef14ebd17859aaa1f983aee" exitCode=0
Jan 23 09:21:07 crc kubenswrapper[4982]: I0123 09:21:07.791000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4da68286-4aaa-4c7e-8738-e41bdc402dc1","Type":"ContainerDied","Data":"4b6effb6dca2b21a3977e1548c2c1478830c0dee5ef14ebd17859aaa1f983aee"}
Jan 23 09:21:08 crc kubenswrapper[4982]: I0123 09:21:08.824990 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4da68286-4aaa-4c7e-8738-e41bdc402dc1","Type":"ContainerStarted","Data":"daae334d53c66f86dd599f6f84f421d0835822b68de449c82867e6b06a2eb3a6"}
Jan 23 09:21:08 crc kubenswrapper[4982]: I0123 09:21:08.826162 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 23 09:21:08 crc kubenswrapper[4982]: I0123 09:21:08.828055 4982 generic.go:334] "Generic (PLEG): container finished" podID="65dd14b2-00d6-4e7f-a06d-c47397d9236a" containerID="dafd5d596b7a588ff84ed5fb2b893e3c7dcbc28180a085d9cc4a4318a11f33e8" exitCode=0
Jan 23 09:21:08 crc kubenswrapper[4982]: I0123 09:21:08.828163 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65dd14b2-00d6-4e7f-a06d-c47397d9236a","Type":"ContainerDied","Data":"dafd5d596b7a588ff84ed5fb2b893e3c7dcbc28180a085d9cc4a4318a11f33e8"}
Jan 23 09:21:08 crc kubenswrapper[4982]: I0123 09:21:08.873585 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.873565082 podStartE2EDuration="36.873565082s" podCreationTimestamp="2026-01-23 09:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:21:08.861059626 +0000 UTC m=+5143.341423012" watchObservedRunningTime="2026-01-23 09:21:08.873565082 +0000 UTC m=+5143.353928468"
Jan 23 09:21:09 crc kubenswrapper[4982]: I0123 09:21:09.838259 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"65dd14b2-00d6-4e7f-a06d-c47397d9236a","Type":"ContainerStarted","Data":"9684652933cf0a570af2b34bce6131c6b2623231fd9790d9493d8d81891ddea8"}
Jan 23 09:21:09 crc kubenswrapper[4982]: I0123 09:21:09.838541 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:21:09 crc kubenswrapper[4982]: I0123 09:21:09.874002 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.873970288 podStartE2EDuration="36.873970288s" podCreationTimestamp="2026-01-23 09:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:21:09.864155304 +0000 UTC m=+5144.344518690" watchObservedRunningTime="2026-01-23 09:21:09.873970288 +0000 UTC m=+5144.354333704"
Jan 23 09:21:12 crc kubenswrapper[4982]: I0123 09:21:12.827593 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:21:12 crc kubenswrapper[4982]: E0123 09:21:12.828031 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:21:22 crc kubenswrapper[4982]: I0123 09:21:22.916821 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 23 09:21:23 crc kubenswrapper[4982]: I0123 09:21:23.966803 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 23 09:21:26 crc kubenswrapper[4982]: I0123 09:21:26.828010 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:21:26 crc kubenswrapper[4982]: E0123 09:21:26.828466 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.440157 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.441264 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.443234 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rfqmt"
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.449360 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.492636 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcnfv\" (UniqueName: \"kubernetes.io/projected/460811db-f1be-4365-8c70-81084971e6cc-kube-api-access-dcnfv\") pod \"mariadb-client\" (UID: \"460811db-f1be-4365-8c70-81084971e6cc\") " pod="openstack/mariadb-client"
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.594745 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcnfv\" (UniqueName: \"kubernetes.io/projected/460811db-f1be-4365-8c70-81084971e6cc-kube-api-access-dcnfv\") pod \"mariadb-client\" (UID: \"460811db-f1be-4365-8c70-81084971e6cc\") " pod="openstack/mariadb-client"
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.613880 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcnfv\" (UniqueName: \"kubernetes.io/projected/460811db-f1be-4365-8c70-81084971e6cc-kube-api-access-dcnfv\") pod \"mariadb-client\" (UID: \"460811db-f1be-4365-8c70-81084971e6cc\") " pod="openstack/mariadb-client"
Jan 23 09:21:27 crc kubenswrapper[4982]: I0123 09:21:27.773715 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 23 09:21:28 crc kubenswrapper[4982]: I0123 09:21:28.295857 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 09:21:28 crc kubenswrapper[4982]: I0123 09:21:28.301472 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 23 09:21:28 crc kubenswrapper[4982]: I0123 09:21:28.985438 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"460811db-f1be-4365-8c70-81084971e6cc","Type":"ContainerStarted","Data":"2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c"}
Jan 23 09:21:28 crc kubenswrapper[4982]: I0123 09:21:28.985827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"460811db-f1be-4365-8c70-81084971e6cc","Type":"ContainerStarted","Data":"059c161cb5f3f859d11413a3ad4e4461c4e401a1026bc6ecac289e4b5cc29a0c"}
Jan 23 09:21:29 crc kubenswrapper[4982]: I0123 09:21:29.005044 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.510316718 podStartE2EDuration="2.004998768s" podCreationTimestamp="2026-01-23 09:21:27 +0000 UTC" firstStartedPulling="2026-01-23 09:21:28.295510207 +0000 UTC m=+5162.775873593" lastFinishedPulling="2026-01-23 09:21:28.790192257 +0000 UTC m=+5163.270555643" observedRunningTime="2026-01-23 09:21:29.001788802 +0000 UTC m=+5163.482152218" watchObservedRunningTime="2026-01-23 09:21:29.004998768 +0000 UTC m=+5163.485362174"
Jan 23 09:21:41 crc kubenswrapper[4982]: I0123 09:21:41.827735 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:21:41 crc kubenswrapper[4982]: E0123 09:21:41.828419 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:21:42 crc kubenswrapper[4982]: I0123 09:21:42.516527 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 23 09:21:42 crc kubenswrapper[4982]: I0123 09:21:42.517018 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="460811db-f1be-4365-8c70-81084971e6cc" containerName="mariadb-client" containerID="cri-o://2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c" gracePeriod=30
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.031132 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.101050 4982 generic.go:334] "Generic (PLEG): container finished" podID="460811db-f1be-4365-8c70-81084971e6cc" containerID="2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c" exitCode=143
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.101101 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"460811db-f1be-4365-8c70-81084971e6cc","Type":"ContainerDied","Data":"2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c"}
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.101132 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"460811db-f1be-4365-8c70-81084971e6cc","Type":"ContainerDied","Data":"059c161cb5f3f859d11413a3ad4e4461c4e401a1026bc6ecac289e4b5cc29a0c"}
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.101154 4982 scope.go:117] "RemoveContainer" containerID="2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c"
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.101105 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.120605 4982 scope.go:117] "RemoveContainer" containerID="2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c"
Jan 23 09:21:43 crc kubenswrapper[4982]: E0123 09:21:43.121169 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c\": container with ID starting with 2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c not found: ID does not exist" containerID="2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c"
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.121203 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c"} err="failed to get container status \"2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c\": rpc error: code = NotFound desc = could not find container \"2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c\": container with ID starting with 2c50281c3bd751eef637f5237e306ab5029333e8dbfc3ffb95c37877d9e1098c not found: ID does not exist"
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.160275 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcnfv\" (UniqueName: \"kubernetes.io/projected/460811db-f1be-4365-8c70-81084971e6cc-kube-api-access-dcnfv\") pod \"460811db-f1be-4365-8c70-81084971e6cc\" (UID: \"460811db-f1be-4365-8c70-81084971e6cc\") "
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.165286 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460811db-f1be-4365-8c70-81084971e6cc-kube-api-access-dcnfv" (OuterVolumeSpecName: "kube-api-access-dcnfv") pod "460811db-f1be-4365-8c70-81084971e6cc" (UID: "460811db-f1be-4365-8c70-81084971e6cc"). InnerVolumeSpecName "kube-api-access-dcnfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.262446 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcnfv\" (UniqueName: \"kubernetes.io/projected/460811db-f1be-4365-8c70-81084971e6cc-kube-api-access-dcnfv\") on node \"crc\" DevicePath \"\""
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.430370 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.437529 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 23 09:21:43 crc kubenswrapper[4982]: I0123 09:21:43.842234 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460811db-f1be-4365-8c70-81084971e6cc" path="/var/lib/kubelet/pods/460811db-f1be-4365-8c70-81084971e6cc/volumes"
Jan 23 09:21:56 crc kubenswrapper[4982]: I0123 09:21:56.827962 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:21:56 crc kubenswrapper[4982]: E0123 09:21:56.828545 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:22:09 crc kubenswrapper[4982]: I0123 09:22:09.827364 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:22:09 crc kubenswrapper[4982]: E0123 09:22:09.828050 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:22:22 crc kubenswrapper[4982]: I0123 09:22:22.827551 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:22:22 crc kubenswrapper[4982]: E0123 09:22:22.828346 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:22:35 crc kubenswrapper[4982]: I0123 09:22:35.832549 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:22:35 crc kubenswrapper[4982]: E0123 09:22:35.833228 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:22:48 crc kubenswrapper[4982]: I0123 09:22:48.827861 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:22:48 crc kubenswrapper[4982]: E0123 09:22:48.828798 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:23:02 crc kubenswrapper[4982]: I0123 09:23:02.827450 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:23:02 crc kubenswrapper[4982]: E0123 09:23:02.828389 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:23:16 crc kubenswrapper[4982]: I0123 09:23:16.828117 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:23:16 crc kubenswrapper[4982]: E0123 09:23:16.828972 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:23:29 crc kubenswrapper[4982]: I0123 09:23:29.827897 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:23:29 crc kubenswrapper[4982]: E0123 09:23:29.828680 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:23:42 crc kubenswrapper[4982]: I0123 09:23:42.827535 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:23:42 crc kubenswrapper[4982]: E0123 09:23:42.828263 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:23:43 crc kubenswrapper[4982]: I0123 09:23:43.122916 4982 scope.go:117] "RemoveContainer" containerID="4cb6e01d4590f9238e1a8e5bb2978854e9950c7d5e72285195818b8ca3e1abf5"
Jan 23 09:23:56 crc kubenswrapper[4982]: I0123 09:23:56.828438 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf"
Jan 23 09:23:56 crc kubenswrapper[4982]: E0123 09:23:56.829980 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.732961 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmvnv"]
Jan 23 09:24:03 crc kubenswrapper[4982]: E0123 09:24:03.733898 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460811db-f1be-4365-8c70-81084971e6cc" containerName="mariadb-client"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.733914 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="460811db-f1be-4365-8c70-81084971e6cc" containerName="mariadb-client"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.734162 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="460811db-f1be-4365-8c70-81084971e6cc" containerName="mariadb-client"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.736118 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.745895 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmvnv"]
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.811823 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-utilities\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.811897 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-catalog-content\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.811936 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvqvw\" (UniqueName: \"kubernetes.io/projected/77325f33-c57d-48d8-be2a-a1afb054e4e8-kube-api-access-rvqvw\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.913653 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-utilities\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.913724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-catalog-content\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.913758 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvqvw\" (UniqueName: \"kubernetes.io/projected/77325f33-c57d-48d8-be2a-a1afb054e4e8-kube-api-access-rvqvw\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.915370 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-utilities\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.915735 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-catalog-content\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:03 crc kubenswrapper[4982]: I0123 09:24:03.940705 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvqvw\" (UniqueName: \"kubernetes.io/projected/77325f33-c57d-48d8-be2a-a1afb054e4e8-kube-api-access-rvqvw\") pod \"certified-operators-nmvnv\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " pod="openshift-marketplace/certified-operators-nmvnv"
Jan 23 09:24:04 crc kubenswrapper[4982]: I0123 09:24:04.060561 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:04 crc kubenswrapper[4982]: I0123 09:24:04.392229 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmvnv"] Jan 23 09:24:05 crc kubenswrapper[4982]: I0123 09:24:05.338127 4982 generic.go:334] "Generic (PLEG): container finished" podID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerID="910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6" exitCode=0 Jan 23 09:24:05 crc kubenswrapper[4982]: I0123 09:24:05.338232 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerDied","Data":"910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6"} Jan 23 09:24:05 crc kubenswrapper[4982]: I0123 09:24:05.338397 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerStarted","Data":"eba88e821536d19a14b7bade1d47dedc7d6d673b7d22cdaa1cf43bc2840ea628"} Jan 23 09:24:06 crc kubenswrapper[4982]: I0123 09:24:06.350402 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerStarted","Data":"f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2"} Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.359842 4982 generic.go:334] "Generic (PLEG): container finished" podID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerID="f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2" exitCode=0 Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.359933 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerDied","Data":"f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2"} Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.732747 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdf8h"] Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.734703 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.741345 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdf8h"] Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.923084 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gqg\" (UniqueName: \"kubernetes.io/projected/17ccd007-390e-48e4-bee7-3cee1fa48bbf-kube-api-access-j5gqg\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.923230 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-catalog-content\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:07 crc kubenswrapper[4982]: I0123 09:24:07.923450 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-utilities\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.025348 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gqg\" (UniqueName: \"kubernetes.io/projected/17ccd007-390e-48e4-bee7-3cee1fa48bbf-kube-api-access-j5gqg\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.025456 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-catalog-content\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.025527 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-utilities\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.026078 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-utilities\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.026151 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-catalog-content\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.050468 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j5gqg\" (UniqueName: \"kubernetes.io/projected/17ccd007-390e-48e4-bee7-3cee1fa48bbf-kube-api-access-j5gqg\") pod \"redhat-marketplace-xdf8h\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.067798 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.371220 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdf8h"] Jan 23 09:24:08 crc kubenswrapper[4982]: W0123 09:24:08.375820 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17ccd007_390e_48e4_bee7_3cee1fa48bbf.slice/crio-dac71fd4ece3824c5be7656aaa54c765b47e23e29c1a4f194ade5d49d62204f6 WatchSource:0}: Error finding container dac71fd4ece3824c5be7656aaa54c765b47e23e29c1a4f194ade5d49d62204f6: Status 404 returned error can't find the container with id dac71fd4ece3824c5be7656aaa54c765b47e23e29c1a4f194ade5d49d62204f6 Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.377504 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerStarted","Data":"e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa"} Jan 23 09:24:08 crc kubenswrapper[4982]: I0123 09:24:08.402385 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmvnv" podStartSLOduration=2.683631191 podStartE2EDuration="5.402360565s" podCreationTimestamp="2026-01-23 09:24:03 +0000 UTC" firstStartedPulling="2026-01-23 09:24:05.339863803 +0000 UTC m=+5319.820227189" lastFinishedPulling="2026-01-23 09:24:08.058593177 +0000 UTC m=+5322.538956563" observedRunningTime="2026-01-23 09:24:08.393795553 +0000 UTC m=+5322.874158939" watchObservedRunningTime="2026-01-23 09:24:08.402360565 +0000 UTC m=+5322.882723961" Jan 23 09:24:09 crc kubenswrapper[4982]: I0123 09:24:09.387701 4982 generic.go:334] "Generic (PLEG): container finished" podID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerID="710c842f4f8186e45dd05d636d370fe66d2a678d23b33a8e4752775f54cd8bbf" exitCode=0 Jan 23 09:24:09 crc kubenswrapper[4982]: I0123 09:24:09.389894 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdf8h" event={"ID":"17ccd007-390e-48e4-bee7-3cee1fa48bbf","Type":"ContainerDied","Data":"710c842f4f8186e45dd05d636d370fe66d2a678d23b33a8e4752775f54cd8bbf"} Jan 23 09:24:09 crc kubenswrapper[4982]: I0123 09:24:09.389972 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdf8h" event={"ID":"17ccd007-390e-48e4-bee7-3cee1fa48bbf","Type":"ContainerStarted","Data":"dac71fd4ece3824c5be7656aaa54c765b47e23e29c1a4f194ade5d49d62204f6"} Jan 23 09:24:10 crc kubenswrapper[4982]: I0123 09:24:10.827475 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:24:10 crc kubenswrapper[4982]: E0123 09:24:10.829191 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:24:11 crc kubenswrapper[4982]: I0123 09:24:11.408532 4982 generic.go:334] "Generic (PLEG): container finished" podID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerID="fc69303442ede4ed4d3e10d52a8d7bf971ae81d66cf2811b2ba709f67527c9a8" exitCode=0 Jan 23 09:24:11 crc kubenswrapper[4982]: I0123 09:24:11.408590 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdf8h" event={"ID":"17ccd007-390e-48e4-bee7-3cee1fa48bbf","Type":"ContainerDied","Data":"fc69303442ede4ed4d3e10d52a8d7bf971ae81d66cf2811b2ba709f67527c9a8"} Jan 23 09:24:12 crc kubenswrapper[4982]: I0123 09:24:12.417682 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdf8h" event={"ID":"17ccd007-390e-48e4-bee7-3cee1fa48bbf","Type":"ContainerStarted","Data":"717866c98f64dcaf3127e5362e7261cf34276bb29dbd109b2d79dc4a449369c3"} Jan 23 09:24:12 crc kubenswrapper[4982]: I0123 09:24:12.444990 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdf8h" podStartSLOduration=2.7125709799999997 podStartE2EDuration="5.444971933s" podCreationTimestamp="2026-01-23 09:24:07 +0000 UTC" firstStartedPulling="2026-01-23 09:24:09.390842119 +0000 UTC m=+5323.871205535" lastFinishedPulling="2026-01-23 09:24:12.123243102 +0000 UTC m=+5326.603606488" observedRunningTime="2026-01-23 09:24:12.439017142 +0000 UTC m=+5326.919380538" watchObservedRunningTime="2026-01-23 09:24:12.444971933 +0000 UTC m=+5326.925335309" Jan 23 09:24:14 crc kubenswrapper[4982]: I0123 09:24:14.061060 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:14 crc kubenswrapper[4982]: I0123 09:24:14.062559 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:14 crc kubenswrapper[4982]: I0123 09:24:14.101849 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:14 crc kubenswrapper[4982]: I0123 09:24:14.474361 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:16 crc kubenswrapper[4982]: I0123 09:24:16.721173 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmvnv"] Jan 23 09:24:17 crc kubenswrapper[4982]: I0123 09:24:17.458273 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmvnv" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="registry-server" containerID="cri-o://e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa" gracePeriod=2 Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.068557 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.068940 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.115213 4982 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.398064 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.471163 4982 generic.go:334] "Generic (PLEG): container finished" podID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerID="e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa" exitCode=0 Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.471233 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvnv" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.471280 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerDied","Data":"e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa"} Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.471362 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvnv" event={"ID":"77325f33-c57d-48d8-be2a-a1afb054e4e8","Type":"ContainerDied","Data":"eba88e821536d19a14b7bade1d47dedc7d6d673b7d22cdaa1cf43bc2840ea628"} Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.471409 4982 scope.go:117] "RemoveContainer" containerID="e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.492206 4982 scope.go:117] "RemoveContainer" containerID="f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.514858 4982 scope.go:117] "RemoveContainer" containerID="910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.521348 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.557585 4982 scope.go:117] "RemoveContainer" containerID="e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa" Jan 23 09:24:18 crc kubenswrapper[4982]: E0123 09:24:18.558101 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa\": container with ID starting with e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa not found: ID does not exist" containerID="e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.558144 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa"} err="failed to get container status \"e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa\": rpc error: code = NotFound desc = could not find container \"e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa\": container with ID starting with e05618ecc9c1b995c636dbb76e7f55f6b555074d1bc22a69053083a2b1f99daa not found: ID does not exist" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.558163 4982 scope.go:117] "RemoveContainer" containerID="f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2" Jan 23 09:24:18 crc 
kubenswrapper[4982]: E0123 09:24:18.558606 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2\": container with ID starting with f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2 not found: ID does not exist" containerID="f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.558659 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2"} err="failed to get container status \"f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2\": rpc error: code = NotFound desc = could not find container \"f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2\": container with ID starting with f697ab2fa01752308b484872fc3fe9b0820cc6f26750e8152c544112b1b4f3b2 not found: ID does not exist" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.558685 4982 scope.go:117] "RemoveContainer" containerID="910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6" Jan 23 09:24:18 crc kubenswrapper[4982]: E0123 09:24:18.559097 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6\": container with ID starting with 910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6 not found: ID does not exist" containerID="910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.559147 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6"} err="failed to get container status \"910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6\": rpc error: code = NotFound desc = could not find container \"910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6\": container with ID starting with 910daed97b7c258188c8ddb577dd8ee1ce0de4030d8a1d6515daaf8c754b8ae6 not found: ID does not exist" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.598586 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-utilities\") pod \"77325f33-c57d-48d8-be2a-a1afb054e4e8\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.598684 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvqvw\" (UniqueName: \"kubernetes.io/projected/77325f33-c57d-48d8-be2a-a1afb054e4e8-kube-api-access-rvqvw\") pod \"77325f33-c57d-48d8-be2a-a1afb054e4e8\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.598747 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-catalog-content\") pod \"77325f33-c57d-48d8-be2a-a1afb054e4e8\" (UID: \"77325f33-c57d-48d8-be2a-a1afb054e4e8\") " Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.599747 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-utilities" (OuterVolumeSpecName: "utilities") pod "77325f33-c57d-48d8-be2a-a1afb054e4e8" (UID: "77325f33-c57d-48d8-be2a-a1afb054e4e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.605314 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77325f33-c57d-48d8-be2a-a1afb054e4e8-kube-api-access-rvqvw" (OuterVolumeSpecName: "kube-api-access-rvqvw") pod "77325f33-c57d-48d8-be2a-a1afb054e4e8" (UID: "77325f33-c57d-48d8-be2a-a1afb054e4e8"). InnerVolumeSpecName "kube-api-access-rvqvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.660462 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77325f33-c57d-48d8-be2a-a1afb054e4e8" (UID: "77325f33-c57d-48d8-be2a-a1afb054e4e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.700364 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.700405 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvqvw\" (UniqueName: \"kubernetes.io/projected/77325f33-c57d-48d8-be2a-a1afb054e4e8-kube-api-access-rvqvw\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.700417 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77325f33-c57d-48d8-be2a-a1afb054e4e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.809242 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmvnv"] Jan 23 09:24:18 crc kubenswrapper[4982]: I0123 09:24:18.820777 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmvnv"] Jan 23 09:24:19 crc kubenswrapper[4982]: I0123 09:24:19.842074 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" path="/var/lib/kubelet/pods/77325f33-c57d-48d8-be2a-a1afb054e4e8/volumes" Jan 23 09:24:22 crc kubenswrapper[4982]: I0123 09:24:22.121706 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdf8h"] Jan 23 09:24:22 crc kubenswrapper[4982]: I0123 09:24:22.122143 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdf8h" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="registry-server" containerID="cri-o://717866c98f64dcaf3127e5362e7261cf34276bb29dbd109b2d79dc4a449369c3" gracePeriod=2 Jan 23 09:24:22 crc kubenswrapper[4982]: I0123 09:24:22.504972 4982 generic.go:334] "Generic (PLEG): container finished" podID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerID="717866c98f64dcaf3127e5362e7261cf34276bb29dbd109b2d79dc4a449369c3" exitCode=0 Jan 23 09:24:22 crc kubenswrapper[4982]: I0123 09:24:22.505013 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-xdf8h" event={"ID":"17ccd007-390e-48e4-bee7-3cee1fa48bbf","Type":"ContainerDied","Data":"717866c98f64dcaf3127e5362e7261cf34276bb29dbd109b2d79dc4a449369c3"} Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.101759 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.273853 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-utilities\") pod \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.274691 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-catalog-content\") pod \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.274723 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-utilities" (OuterVolumeSpecName: "utilities") pod "17ccd007-390e-48e4-bee7-3cee1fa48bbf" (UID: "17ccd007-390e-48e4-bee7-3cee1fa48bbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.274970 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5gqg\" (UniqueName: \"kubernetes.io/projected/17ccd007-390e-48e4-bee7-3cee1fa48bbf-kube-api-access-j5gqg\") pod \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\" (UID: \"17ccd007-390e-48e4-bee7-3cee1fa48bbf\") " Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.276510 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.281667 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ccd007-390e-48e4-bee7-3cee1fa48bbf-kube-api-access-j5gqg" (OuterVolumeSpecName: "kube-api-access-j5gqg") pod "17ccd007-390e-48e4-bee7-3cee1fa48bbf" (UID: "17ccd007-390e-48e4-bee7-3cee1fa48bbf"). InnerVolumeSpecName "kube-api-access-j5gqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.310794 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17ccd007-390e-48e4-bee7-3cee1fa48bbf" (UID: "17ccd007-390e-48e4-bee7-3cee1fa48bbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.378291 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5gqg\" (UniqueName: \"kubernetes.io/projected/17ccd007-390e-48e4-bee7-3cee1fa48bbf-kube-api-access-j5gqg\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.378334 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17ccd007-390e-48e4-bee7-3cee1fa48bbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.513598 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdf8h" event={"ID":"17ccd007-390e-48e4-bee7-3cee1fa48bbf","Type":"ContainerDied","Data":"dac71fd4ece3824c5be7656aaa54c765b47e23e29c1a4f194ade5d49d62204f6"} Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.513740 4982 scope.go:117] "RemoveContainer" containerID="717866c98f64dcaf3127e5362e7261cf34276bb29dbd109b2d79dc4a449369c3" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.513659 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdf8h" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.533191 4982 scope.go:117] "RemoveContainer" containerID="fc69303442ede4ed4d3e10d52a8d7bf971ae81d66cf2811b2ba709f67527c9a8" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.550394 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdf8h"] Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.557587 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdf8h"] Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.581013 4982 scope.go:117] "RemoveContainer" containerID="710c842f4f8186e45dd05d636d370fe66d2a678d23b33a8e4752775f54cd8bbf" Jan 23 09:24:23 crc kubenswrapper[4982]: I0123 09:24:23.838138 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" path="/var/lib/kubelet/pods/17ccd007-390e-48e4-bee7-3cee1fa48bbf/volumes" Jan 23 09:24:24 crc kubenswrapper[4982]: I0123 09:24:24.826922 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:24:24 crc kubenswrapper[4982]: E0123 09:24:24.827143 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:24:38 crc kubenswrapper[4982]: I0123 09:24:38.827878 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:24:38 crc kubenswrapper[4982]: E0123 09:24:38.828636 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:24:52 crc kubenswrapper[4982]: I0123 09:24:52.827746 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:24:53 crc kubenswrapper[4982]: I0123 09:24:53.794523 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"7fe69a70f46ce57d41d7e5c077b325e9c4d29babec4475eb09ec573022ab581a"} Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.450953 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 23 09:25:19 crc kubenswrapper[4982]: E0123 09:25:19.453055 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="registry-server" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453083 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="registry-server" Jan 23 09:25:19 crc kubenswrapper[4982]: E0123 09:25:19.453111 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="extract-content" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453120 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="extract-content" Jan 23 09:25:19 crc kubenswrapper[4982]: E0123 09:25:19.453150 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="extract-utilities" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453159 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="extract-utilities" Jan 23 09:25:19 crc kubenswrapper[4982]: E0123 09:25:19.453217 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="extract-utilities" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453228 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="extract-utilities" Jan 23 09:25:19 crc kubenswrapper[4982]: E0123 09:25:19.453249 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="extract-content" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453257 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="extract-content" Jan 23 09:25:19 crc kubenswrapper[4982]: E0123 09:25:19.453296 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="registry-server" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453305 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="registry-server" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453715 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ccd007-390e-48e4-bee7-3cee1fa48bbf" containerName="registry-server" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.453796 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="77325f33-c57d-48d8-be2a-a1afb054e4e8" containerName="registry-server" Jan 23 09:25:19 crc kubenswrapper[4982]: 
I0123 09:25:19.454465 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.456371 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rfqmt" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.460855 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.580219 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwm7\" (UniqueName: \"kubernetes.io/projected/61c17c05-4452-412a-9434-b1997807b03c-kube-api-access-bgwm7\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") " pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.580303 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") " pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.681988 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgwm7\" (UniqueName: \"kubernetes.io/projected/61c17c05-4452-412a-9434-b1997807b03c-kube-api-access-bgwm7\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") " pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.682099 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") " pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.686579 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.691649 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e18acb80e8cdcbc11623835f11943cd32f5048fb5587b709be126f8403649c88/globalmount\"" pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.705129 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgwm7\" (UniqueName: \"kubernetes.io/projected/61c17c05-4452-412a-9434-b1997807b03c-kube-api-access-bgwm7\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") " pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.726958 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-55b18d2b-5874-4fee-9e7f-2d0cb9cf2c5f\") pod \"mariadb-copy-data\" (UID: \"61c17c05-4452-412a-9434-b1997807b03c\") " pod="openstack/mariadb-copy-data" Jan 23 09:25:19 crc kubenswrapper[4982]: I0123 09:25:19.786847 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 23 09:25:20 crc kubenswrapper[4982]: I0123 09:25:20.319757 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 23 09:25:21 crc kubenswrapper[4982]: I0123 09:25:21.085884 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"61c17c05-4452-412a-9434-b1997807b03c","Type":"ContainerStarted","Data":"61e7d21ab4b3a0c284761fbe14df2722bd4837a82a242b8ca5efe8386715b9b5"} Jan 23 09:25:21 crc kubenswrapper[4982]: I0123 09:25:21.086206 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"61c17c05-4452-412a-9434-b1997807b03c","Type":"ContainerStarted","Data":"26395a84eb24df63dd45cd4cfb83c93ca0ebd16d8a73f27917c8c778839ca34c"} Jan 23 09:25:21 crc kubenswrapper[4982]: I0123 09:25:21.105941 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.105921859 podStartE2EDuration="3.105921859s" podCreationTimestamp="2026-01-23 09:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:25:21.097910732 +0000 UTC m=+5395.578274138" watchObservedRunningTime="2026-01-23 09:25:21.105921859 +0000 UTC m=+5395.586285245" Jan 23 09:25:23 crc kubenswrapper[4982]: I0123 09:25:23.956296 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:23 crc kubenswrapper[4982]: I0123 09:25:23.958083 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:23 crc kubenswrapper[4982]: I0123 09:25:23.965113 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:24 crc kubenswrapper[4982]: I0123 09:25:24.060905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m5ql\" (UniqueName: \"kubernetes.io/projected/09bb2252-2bc5-48b0-9aa1-30b1492b8956-kube-api-access-5m5ql\") pod \"mariadb-client\" (UID: \"09bb2252-2bc5-48b0-9aa1-30b1492b8956\") " pod="openstack/mariadb-client" Jan 23 09:25:24 crc kubenswrapper[4982]: I0123 09:25:24.162374 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m5ql\" (UniqueName: \"kubernetes.io/projected/09bb2252-2bc5-48b0-9aa1-30b1492b8956-kube-api-access-5m5ql\") pod \"mariadb-client\" (UID: \"09bb2252-2bc5-48b0-9aa1-30b1492b8956\") " pod="openstack/mariadb-client" Jan 23 09:25:24 crc kubenswrapper[4982]: I0123 09:25:24.179920 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m5ql\" (UniqueName: \"kubernetes.io/projected/09bb2252-2bc5-48b0-9aa1-30b1492b8956-kube-api-access-5m5ql\") pod \"mariadb-client\" (UID: \"09bb2252-2bc5-48b0-9aa1-30b1492b8956\") " pod="openstack/mariadb-client" Jan 23 09:25:24 crc kubenswrapper[4982]: I0123 09:25:24.281216 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:24 crc kubenswrapper[4982]: I0123 09:25:24.721196 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:24 crc kubenswrapper[4982]: W0123 09:25:24.730918 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09bb2252_2bc5_48b0_9aa1_30b1492b8956.slice/crio-2ddebeb08440852195377c9c4525b929fa0af050a84d00bdd55f1642171f9808 WatchSource:0}: Error finding container 2ddebeb08440852195377c9c4525b929fa0af050a84d00bdd55f1642171f9808: Status 404 returned error can't find the container with id 2ddebeb08440852195377c9c4525b929fa0af050a84d00bdd55f1642171f9808 Jan 23 09:25:25 crc kubenswrapper[4982]: I0123 09:25:25.117493 4982 generic.go:334] "Generic (PLEG): container finished" podID="09bb2252-2bc5-48b0-9aa1-30b1492b8956" containerID="3a77e4ee87e70a6dbb1528ce991a0bc9a7bb4e218d0598679de1cee0ff51ada8" exitCode=0 Jan 23 09:25:25 crc kubenswrapper[4982]: I0123 09:25:25.117554 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"09bb2252-2bc5-48b0-9aa1-30b1492b8956","Type":"ContainerDied","Data":"3a77e4ee87e70a6dbb1528ce991a0bc9a7bb4e218d0598679de1cee0ff51ada8"} Jan 23 09:25:25 crc kubenswrapper[4982]: I0123 09:25:25.117854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"09bb2252-2bc5-48b0-9aa1-30b1492b8956","Type":"ContainerStarted","Data":"2ddebeb08440852195377c9c4525b929fa0af050a84d00bdd55f1642171f9808"} Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.418185 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.438794 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_09bb2252-2bc5-48b0-9aa1-30b1492b8956/mariadb-client/0.log" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.480497 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.488254 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.498475 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m5ql\" (UniqueName: \"kubernetes.io/projected/09bb2252-2bc5-48b0-9aa1-30b1492b8956-kube-api-access-5m5ql\") pod \"09bb2252-2bc5-48b0-9aa1-30b1492b8956\" (UID: \"09bb2252-2bc5-48b0-9aa1-30b1492b8956\") " Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.504236 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bb2252-2bc5-48b0-9aa1-30b1492b8956-kube-api-access-5m5ql" (OuterVolumeSpecName: "kube-api-access-5m5ql") pod "09bb2252-2bc5-48b0-9aa1-30b1492b8956" (UID: "09bb2252-2bc5-48b0-9aa1-30b1492b8956"). InnerVolumeSpecName "kube-api-access-5m5ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.601768 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m5ql\" (UniqueName: \"kubernetes.io/projected/09bb2252-2bc5-48b0-9aa1-30b1492b8956-kube-api-access-5m5ql\") on node \"crc\" DevicePath \"\"" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.609558 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:26 crc kubenswrapper[4982]: E0123 09:25:26.610112 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bb2252-2bc5-48b0-9aa1-30b1492b8956" containerName="mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.610133 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bb2252-2bc5-48b0-9aa1-30b1492b8956" containerName="mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.610375 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bb2252-2bc5-48b0-9aa1-30b1492b8956" containerName="mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.611131 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.618173 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.702988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549hg\" (UniqueName: \"kubernetes.io/projected/d2901cbe-e097-440a-b07d-0e8f068da7a2-kube-api-access-549hg\") pod \"mariadb-client\" (UID: \"d2901cbe-e097-440a-b07d-0e8f068da7a2\") " pod="openstack/mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.804990 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549hg\" (UniqueName: \"kubernetes.io/projected/d2901cbe-e097-440a-b07d-0e8f068da7a2-kube-api-access-549hg\") pod \"mariadb-client\" (UID: \"d2901cbe-e097-440a-b07d-0e8f068da7a2\") " pod="openstack/mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.822228 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549hg\" (UniqueName: \"kubernetes.io/projected/d2901cbe-e097-440a-b07d-0e8f068da7a2-kube-api-access-549hg\") pod \"mariadb-client\" (UID: \"d2901cbe-e097-440a-b07d-0e8f068da7a2\") " pod="openstack/mariadb-client" Jan 23 09:25:26 crc kubenswrapper[4982]: I0123 09:25:26.935913 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:27 crc kubenswrapper[4982]: I0123 09:25:27.145303 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ddebeb08440852195377c9c4525b929fa0af050a84d00bdd55f1642171f9808" Jan 23 09:25:27 crc kubenswrapper[4982]: I0123 09:25:27.145392 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:27 crc kubenswrapper[4982]: I0123 09:25:27.164764 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="09bb2252-2bc5-48b0-9aa1-30b1492b8956" podUID="d2901cbe-e097-440a-b07d-0e8f068da7a2" Jan 23 09:25:27 crc kubenswrapper[4982]: I0123 09:25:27.431100 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:27 crc kubenswrapper[4982]: W0123 09:25:27.432264 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2901cbe_e097_440a_b07d_0e8f068da7a2.slice/crio-069631fd35f0fc6c404b922a6b075e336b38d0a43d284f3adde9422411f53313 WatchSource:0}: Error finding container 069631fd35f0fc6c404b922a6b075e336b38d0a43d284f3adde9422411f53313: Status 404 returned error can't find the container with id 069631fd35f0fc6c404b922a6b075e336b38d0a43d284f3adde9422411f53313 Jan 23 09:25:27 crc kubenswrapper[4982]: I0123 09:25:27.980201 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bb2252-2bc5-48b0-9aa1-30b1492b8956" path="/var/lib/kubelet/pods/09bb2252-2bc5-48b0-9aa1-30b1492b8956/volumes" Jan 23 09:25:28 crc kubenswrapper[4982]: I0123 09:25:28.155231 4982 generic.go:334] "Generic (PLEG): container finished" podID="d2901cbe-e097-440a-b07d-0e8f068da7a2" containerID="fdcba534f41f1c88e3843a42912416e983a2b3810fdc1f3eeea21f445e29dc26" exitCode=0 Jan 23 09:25:28 crc kubenswrapper[4982]: I0123 09:25:28.155281 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d2901cbe-e097-440a-b07d-0e8f068da7a2","Type":"ContainerDied","Data":"fdcba534f41f1c88e3843a42912416e983a2b3810fdc1f3eeea21f445e29dc26"} Jan 23 09:25:28 crc kubenswrapper[4982]: I0123 09:25:28.155310 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"d2901cbe-e097-440a-b07d-0e8f068da7a2","Type":"ContainerStarted","Data":"069631fd35f0fc6c404b922a6b075e336b38d0a43d284f3adde9422411f53313"} Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.510903 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.528565 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_d2901cbe-e097-440a-b07d-0e8f068da7a2/mariadb-client/0.log" Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.563489 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.570521 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.648763 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-549hg\" (UniqueName: \"kubernetes.io/projected/d2901cbe-e097-440a-b07d-0e8f068da7a2-kube-api-access-549hg\") pod \"d2901cbe-e097-440a-b07d-0e8f068da7a2\" (UID: \"d2901cbe-e097-440a-b07d-0e8f068da7a2\") " Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.653531 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2901cbe-e097-440a-b07d-0e8f068da7a2-kube-api-access-549hg" (OuterVolumeSpecName: "kube-api-access-549hg") pod "d2901cbe-e097-440a-b07d-0e8f068da7a2" (UID: "d2901cbe-e097-440a-b07d-0e8f068da7a2"). 
InnerVolumeSpecName "kube-api-access-549hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.751482 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-549hg\" (UniqueName: \"kubernetes.io/projected/d2901cbe-e097-440a-b07d-0e8f068da7a2-kube-api-access-549hg\") on node \"crc\" DevicePath \"\"" Jan 23 09:25:29 crc kubenswrapper[4982]: I0123 09:25:29.838056 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2901cbe-e097-440a-b07d-0e8f068da7a2" path="/var/lib/kubelet/pods/d2901cbe-e097-440a-b07d-0e8f068da7a2/volumes" Jan 23 09:25:30 crc kubenswrapper[4982]: I0123 09:25:30.174583 4982 scope.go:117] "RemoveContainer" containerID="fdcba534f41f1c88e3843a42912416e983a2b3810fdc1f3eeea21f445e29dc26" Jan 23 09:25:30 crc kubenswrapper[4982]: I0123 09:25:30.174764 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 23 09:25:43 crc kubenswrapper[4982]: I0123 09:25:43.220699 4982 scope.go:117] "RemoveContainer" containerID="14dbb11df292eba166d7ce414e7680e147f1864b57d0dfb9ad89afc44f75589c" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.046084 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 09:26:03 crc kubenswrapper[4982]: E0123 09:26:03.050527 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2901cbe-e097-440a-b07d-0e8f068da7a2" containerName="mariadb-client" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.050722 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2901cbe-e097-440a-b07d-0e8f068da7a2" containerName="mariadb-client" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.058273 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2901cbe-e097-440a-b07d-0e8f068da7a2" containerName="mariadb-client" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.060310 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.067023 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tjx5p" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.068896 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.073232 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.074619 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.075187 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.075417 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.082565 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.100253 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.101802 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.107349 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.124069 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.144386 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.206214 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.206619 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14033a67-7424-45ac-8ec1-3230fa3d6672-config\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.206931 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.207438 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.207651 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14033a67-7424-45ac-8ec1-3230fa3d6672-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.207818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.207994 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c94a864-3d02-4070-a3e8-23791aef5c40-config\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.208276 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c94a864-3d02-4070-a3e8-23791aef5c40-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.208725 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.208800 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.208966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.208998 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.209051 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9944\" (UniqueName: \"kubernetes.io/projected/9c94a864-3d02-4070-a3e8-23791aef5c40-kube-api-access-z9944\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.209096 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c94a864-3d02-4070-a3e8-23791aef5c40-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.209186 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xvv\" (UniqueName: \"kubernetes.io/projected/14033a67-7424-45ac-8ec1-3230fa3d6672-kube-api-access-n2xvv\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.209262 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14033a67-7424-45ac-8ec1-3230fa3d6672-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.310826 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c94a864-3d02-4070-a3e8-23791aef5c40-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " 
pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.310895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.310919 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.310976 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311026 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9944\" (UniqueName: \"kubernetes.io/projected/9c94a864-3d02-4070-a3e8-23791aef5c40-kube-api-access-z9944\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311076 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c94a864-3d02-4070-a3e8-23791aef5c40-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-247b9289-bb57-489f-9277-28685f011601\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247b9289-bb57-489f-9277-28685f011601\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xvv\" (UniqueName: \"kubernetes.io/projected/14033a67-7424-45ac-8ec1-3230fa3d6672-kube-api-access-n2xvv\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 
09:26:03.311189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311214 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14033a67-7424-45ac-8ec1-3230fa3d6672-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311239 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpjzz\" (UniqueName: \"kubernetes.io/projected/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-kube-api-access-fpjzz\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311315 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14033a67-7424-45ac-8ec1-3230fa3d6672-config\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311346 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311381 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311410 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311435 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-config\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311461 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/14033a67-7424-45ac-8ec1-3230fa3d6672-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311485 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311507 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c94a864-3d02-4070-a3e8-23791aef5c40-config\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.311574 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.312058 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c94a864-3d02-4070-a3e8-23791aef5c40-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.313188 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14033a67-7424-45ac-8ec1-3230fa3d6672-config\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.314058 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c94a864-3d02-4070-a3e8-23791aef5c40-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.314499 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14033a67-7424-45ac-8ec1-3230fa3d6672-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.314736 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.314767 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/14037bae73861bcb00793878ddd4bb1cb9c2e578d079ad8393adb8aa67c5aa94/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.315099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14033a67-7424-45ac-8ec1-3230fa3d6672-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.316191 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.316220 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8556e9729424805422c3bfc844d8a3fba0fd183e9c3675ea4dab2f1882ae7ad2/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.316347 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c94a864-3d02-4070-a3e8-23791aef5c40-config\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.318035 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.319232 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.319792 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.323371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.325298 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c94a864-3d02-4070-a3e8-23791aef5c40-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.329500 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14033a67-7424-45ac-8ec1-3230fa3d6672-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.332777 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9944\" (UniqueName: \"kubernetes.io/projected/9c94a864-3d02-4070-a3e8-23791aef5c40-kube-api-access-z9944\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.340375 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xvv\" (UniqueName: \"kubernetes.io/projected/14033a67-7424-45ac-8ec1-3230fa3d6672-kube-api-access-n2xvv\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.351797 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-960295b7-26b9-4c5c-a0a7-48efc7168a25\") pod \"ovsdbserver-nb-0\" (UID: \"9c94a864-3d02-4070-a3e8-23791aef5c40\") " pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.357229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-311998c6-d2a8-40b2-b17f-f841c497e93b\") pod \"ovsdbserver-nb-2\" (UID: \"14033a67-7424-45ac-8ec1-3230fa3d6672\") " pod="openstack/ovsdbserver-nb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.387110 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.404314 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
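Each MountDevice succeeded entry above names a per-volume staging directory of the form /var/lib/kubelet/plugins/kubernetes.io/csi/<driver>/<hash>/globalmount (the exact input to the hash varies across kubelet versions, so it is not reproduced here). A read-only sketch that lists those directories on the node; it assumes only the path layout seen in the log and must run on the node itself (e.g. inside the crc VM) with sufficient privileges.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Driver directory as seen in the MountDevice log entries above.
	base := "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner"
	entries, err := os.ReadDir(base)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		// Each per-volume directory holds a "globalmount" staging dir.
		gm := filepath.Join(base, e.Name(), "globalmount")
		if st, err := os.Stat(gm); err == nil && st.IsDir() {
			fmt.Println(gm)
		}
	}
}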
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413691 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-247b9289-bb57-489f-9277-28685f011601\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247b9289-bb57-489f-9277-28685f011601\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413775 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413813 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpjzz\" (UniqueName: \"kubernetes.io/projected/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-kube-api-access-fpjzz\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413879 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413907 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-config\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413941 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.413979 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.414058 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.414689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.416348 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.416895 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-config\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.418144 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.418169 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-247b9289-bb57-489f-9277-28685f011601\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247b9289-bb57-489f-9277-28685f011601\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/508a363557ecb0804d4e31c535aed9fa2e7deeb2b68a65803a82dcf8b0ed91e3/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.419573 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.420018 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.425923 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.441110 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpjzz\" (UniqueName: \"kubernetes.io/projected/19d8b73a-7c62-4cf2-ba91-87a7ceb346ed-kube-api-access-fpjzz\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.474889 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-247b9289-bb57-489f-9277-28685f011601\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247b9289-bb57-489f-9277-28685f011601\") pod \"ovsdbserver-nb-1\" (UID: \"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed\") " pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.521706 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.529737 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.530880 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.532433 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.532829 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.533414 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-phvw9" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.533603 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.537471 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.544093 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.546933 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.573907 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.595014 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.607656 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.717364 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718075 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718183 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718341 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718449 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697244a8-6ee6-40b8-8575-3ecc60e5b34b-config\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718538 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718759 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.718955 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719050 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: 
\"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719120 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/697244a8-6ee6-40b8-8575-3ecc60e5b34b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719203 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7csbg\" (UniqueName: \"kubernetes.io/projected/697244a8-6ee6-40b8-8575-3ecc60e5b34b-kube-api-access-7csbg\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-config\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719362 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719480 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvv8g\" (UniqueName: \"kubernetes.io/projected/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-kube-api-access-bvv8g\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719551 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719668 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/697244a8-6ee6-40b8-8575-3ecc60e5b34b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719745 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28a47893-cd03-4576-9792-082b484dc7bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a47893-cd03-4576-9792-082b484dc7bd\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " 
pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719884 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-config\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.719984 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.720056 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.720123 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.720186 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.720262 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2mzr\" (UniqueName: \"kubernetes.io/projected/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-kube-api-access-v2mzr\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823000 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823062 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823102 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc 
kubenswrapper[4982]: I0123 09:26:03.823132 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823160 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/697244a8-6ee6-40b8-8575-3ecc60e5b34b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823191 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7csbg\" (UniqueName: \"kubernetes.io/projected/697244a8-6ee6-40b8-8575-3ecc60e5b34b-kube-api-access-7csbg\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823220 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-config\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823236 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823263 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvv8g\" (UniqueName: \"kubernetes.io/projected/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-kube-api-access-bvv8g\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823317 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/697244a8-6ee6-40b8-8575-3ecc60e5b34b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823348 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823368 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28a47893-cd03-4576-9792-082b484dc7bd\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a47893-cd03-4576-9792-082b484dc7bd\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-config\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823442 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823461 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.823911 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.824033 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2mzr\" (UniqueName: \"kubernetes.io/projected/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-kube-api-access-v2mzr\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.824185 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.824257 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.824323 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1" 
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.824693 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/697244a8-6ee6-40b8-8575-3ecc60e5b34b-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.825343 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697244a8-6ee6-40b8-8575-3ecc60e5b34b-config\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.825386 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.826074 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.830860 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.830916 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.830959 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/931e031ade0ee571d87381888fa3b4df951abc09621471d350aab3684c87147e/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.831001 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.831499 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-config\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.833982 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.834768 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/697244a8-6ee6-40b8-8575-3ecc60e5b34b-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.835198 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.835673 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.836537 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.837346 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-config\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.838040 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697244a8-6ee6-40b8-8575-3ecc60e5b34b-config\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.838520 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.838761 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.840133 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.840162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.841242 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/697244a8-6ee6-40b8-8575-3ecc60e5b34b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.844742 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.845392 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.845425 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/79330a3e0a57ec1806553b965bbfec235ed41a814d1c8f4d196cb430c53ac79d/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.845875 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.845919 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28a47893-cd03-4576-9792-082b484dc7bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a47893-cd03-4576-9792-082b484dc7bd\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8e26ebae6f1c3c06bf48fc490c071aea7002276772045f954172776cc99ed74a/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.859287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7csbg\" (UniqueName: \"kubernetes.io/projected/697244a8-6ee6-40b8-8575-3ecc60e5b34b-kube-api-access-7csbg\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.861028 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvv8g\" (UniqueName: \"kubernetes.io/projected/eaf53cb7-e27b-4fee-be64-1cb37bdda7cf-kube-api-access-bvv8g\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.865985 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2mzr\" (UniqueName: \"kubernetes.io/projected/038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1-kube-api-access-v2mzr\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.886128 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d0cc477b-7e22-49ba-83b8-83afb03fb140\") pod \"ovsdbserver-sb-2\" (UID: \"697244a8-6ee6-40b8-8575-3ecc60e5b34b\") " pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.901762 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28a47893-cd03-4576-9792-082b484dc7bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28a47893-cd03-4576-9792-082b484dc7bd\") pod \"ovsdbserver-sb-1\" (UID: \"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1\") " pod="openstack/ovsdbserver-sb-1"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.907950 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dcf10add-71c5-49c5-8341-e34c64a4d23f\") pod \"ovsdbserver-sb-0\" (UID: \"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf\") " pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:03 crc kubenswrapper[4982]: I0123 09:26:03.913151 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.038338 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.188287 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.197726 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.312894 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.436131 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 23 09:26:04 crc kubenswrapper[4982]: W0123 09:26:04.460555 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod697244a8_6ee6_40b8_8575_3ecc60e5b34b.slice/crio-04103100a75db80d605cd20e4ba1f5eb17e0d42b3976beb92bab233fe6006633 WatchSource:0}: Error finding container 04103100a75db80d605cd20e4ba1f5eb17e0d42b3976beb92bab233fe6006633: Status 404 returned error can't find the container with id 04103100a75db80d605cd20e4ba1f5eb17e0d42b3976beb92bab233fe6006633 Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.474760 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"14033a67-7424-45ac-8ec1-3230fa3d6672","Type":"ContainerStarted","Data":"c785ac90b870d233af1e3c8eb74060baef6dc4eb8b15806aa34557538a3cf9b0"} Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.474803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"14033a67-7424-45ac-8ec1-3230fa3d6672","Type":"ContainerStarted","Data":"7a7c9ad9ffa96716d10571d7dc38096a5d7a441b1efda2e93cfc8ef8ecea11ec"} Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.475826 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed","Type":"ContainerStarted","Data":"312b8aad4220585fd71b909841987812ac678cda23e06691e812596f0c28df6d"} Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.476854 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"697244a8-6ee6-40b8-8575-3ecc60e5b34b","Type":"ContainerStarted","Data":"04103100a75db80d605cd20e4ba1f5eb17e0d42b3976beb92bab233fe6006633"} Jan 23 09:26:04 crc kubenswrapper[4982]: W0123 09:26:04.763766 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c94a864_3d02_4070_a3e8_23791aef5c40.slice/crio-b0ca1aada22a0075d1bf0f15e285375bb700f89b685e7bbed96e0f06f8c1b44f WatchSource:0}: Error finding container b0ca1aada22a0075d1bf0f15e285375bb700f89b685e7bbed96e0f06f8c1b44f: Status 404 returned error can't find the container with id b0ca1aada22a0075d1bf0f15e285375bb700f89b685e7bbed96e0f06f8c1b44f Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.766844 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 09:26:04 crc kubenswrapper[4982]: W0123 09:26:04.867009 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaf53cb7_e27b_4fee_be64_1cb37bdda7cf.slice/crio-d3c61a75216b64396fc0c5d55bfb6a465627d7573a6a58f9825834f7b892dc85 WatchSource:0}: Error finding container d3c61a75216b64396fc0c5d55bfb6a465627d7573a6a58f9825834f7b892dc85: Status 404 returned error can't find the container with id d3c61a75216b64396fc0c5d55bfb6a465627d7573a6a58f9825834f7b892dc85 Jan 23 09:26:04 crc kubenswrapper[4982]: I0123 09:26:04.870950 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 
09:26:05.485356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"697244a8-6ee6-40b8-8575-3ecc60e5b34b","Type":"ContainerStarted","Data":"1d9ad742831d13a24caabe4cd0befa08a5c97ce47b670e73572596fa401ec786"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.485685 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"697244a8-6ee6-40b8-8575-3ecc60e5b34b","Type":"ContainerStarted","Data":"e9496dc48b61268eaf2139006d3f9622d01da6f65ac9b15f636c1f54e2337260"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.487839 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"14033a67-7424-45ac-8ec1-3230fa3d6672","Type":"ContainerStarted","Data":"4543f5e2b378d2a1fe25037ccd1c7012a52c9ca566be1f133595b99fbcf2ec11"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.490380 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf","Type":"ContainerStarted","Data":"a2389b1033f3aabc8daaa8fd1dd8938bf517ab3ebc8efa4af7ccb49894e2f345"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.490410 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf","Type":"ContainerStarted","Data":"1c75480746c1a5d18b7303917dd9a310fcc408247e8c8a2e530fc9147ab5e169"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.490429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"eaf53cb7-e27b-4fee-be64-1cb37bdda7cf","Type":"ContainerStarted","Data":"d3c61a75216b64396fc0c5d55bfb6a465627d7573a6a58f9825834f7b892dc85"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.492451 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed","Type":"ContainerStarted","Data":"000b0d767141e155f321caf32f746a79a669fbf724d2645c9dbac996f23caa1c"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.492478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"19d8b73a-7c62-4cf2-ba91-87a7ceb346ed","Type":"ContainerStarted","Data":"5c5cea9fe2387c2a132c4dff540dc059f568ff499c0deb4bc1a6b534e2a0cb33"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.494179 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9c94a864-3d02-4070-a3e8-23791aef5c40","Type":"ContainerStarted","Data":"b5f030206f4ba82c786c2745c46e3117f7638ac1877496a9cd7fccca8985c59a"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.494209 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9c94a864-3d02-4070-a3e8-23791aef5c40","Type":"ContainerStarted","Data":"339c33f62caacabfff87c386565e7edeb4e6e703201671eb9100b529a1c77b8f"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.494222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9c94a864-3d02-4070-a3e8-23791aef5c40","Type":"ContainerStarted","Data":"b0ca1aada22a0075d1bf0f15e285375bb700f89b685e7bbed96e0f06f8c1b44f"} Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.512927 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.512906495 podStartE2EDuration="3.512906495s" podCreationTimestamp="2026-01-23 
09:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:05.505106744 +0000 UTC m=+5439.985470130" watchObservedRunningTime="2026-01-23 09:26:05.512906495 +0000 UTC m=+5439.993269881" Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.546873 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.546850903 podStartE2EDuration="3.546850903s" podCreationTimestamp="2026-01-23 09:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:05.525363232 +0000 UTC m=+5440.005726618" watchObservedRunningTime="2026-01-23 09:26:05.546850903 +0000 UTC m=+5440.027214299" Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.570270 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.5702509559999998 podStartE2EDuration="3.570250956s" podCreationTimestamp="2026-01-23 09:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:05.568611842 +0000 UTC m=+5440.048975228" watchObservedRunningTime="2026-01-23 09:26:05.570250956 +0000 UTC m=+5440.050614342" Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.571105 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.571098679 podStartE2EDuration="3.571098679s" podCreationTimestamp="2026-01-23 09:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:05.554830049 +0000 UTC m=+5440.035193455" watchObservedRunningTime="2026-01-23 09:26:05.571098679 +0000 UTC m=+5440.051462065" Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.601607 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.601547683 podStartE2EDuration="3.601547683s" podCreationTimestamp="2026-01-23 09:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:05.591493561 +0000 UTC m=+5440.071856957" watchObservedRunningTime="2026-01-23 09:26:05.601547683 +0000 UTC m=+5440.081911069" Jan 23 09:26:05 crc kubenswrapper[4982]: I0123 09:26:05.840082 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 23 09:26:05 crc kubenswrapper[4982]: W0123 09:26:05.843794 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod038a60c8_ea6e_4f87_bc2a_a0851cc2fdb1.slice/crio-9da6d344d3e4dd520459daf59411155ba146b13b82509e8b2dc87376236832d9 WatchSource:0}: Error finding container 9da6d344d3e4dd520459daf59411155ba146b13b82509e8b2dc87376236832d9: Status 404 returned error can't find the container with id 9da6d344d3e4dd520459daf59411155ba146b13b82509e8b2dc87376236832d9 Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.387832 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.405402 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.503860 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1","Type":"ContainerStarted","Data":"22cc08ea9b2e941f56302ee482cfdf79f12112d63492e149ec00191ee95322ad"} Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.505053 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1","Type":"ContainerStarted","Data":"d4ba8ffa219c6e265196cdeb8ac5ffe6191b2a77d14fb2121d797b79e491eb94"} Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.505087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1","Type":"ContainerStarted","Data":"9da6d344d3e4dd520459daf59411155ba146b13b82509e8b2dc87376236832d9"} Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.532419 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.53240057 podStartE2EDuration="4.53240057s" podCreationTimestamp="2026-01-23 09:26:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:06.523785627 +0000 UTC m=+5441.004149013" watchObservedRunningTime="2026-01-23 09:26:06.53240057 +0000 UTC m=+5441.012763956" Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.717824 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:06 crc kubenswrapper[4982]: I0123 09:26:06.913713 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:07 crc kubenswrapper[4982]: I0123 09:26:07.189032 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:07 crc kubenswrapper[4982]: I0123 09:26:07.198039 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:08 crc kubenswrapper[4982]: I0123 09:26:08.388554 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:08 crc kubenswrapper[4982]: I0123 09:26:08.404808 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:08 crc kubenswrapper[4982]: I0123 09:26:08.717886 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:08 crc kubenswrapper[4982]: I0123 09:26:08.914105 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.189471 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.198044 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.439989 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.466588 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:09 crc 
kubenswrapper[4982]: I0123 09:26:09.565358 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.575300 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.769643 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f6555dd7c-vwtcj"] Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.771557 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.791158 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.803277 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.810070 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f6555dd7c-vwtcj"] Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.857267 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.871508 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-dns-svc\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.871584 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvxc\" (UniqueName: \"kubernetes.io/projected/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-kube-api-access-wjvxc\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.871637 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-config\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.871653 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-ovsdbserver-nb\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.973875 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-config\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.974194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-ovsdbserver-nb\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.974461 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-dns-svc\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.974511 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjvxc\" (UniqueName: \"kubernetes.io/projected/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-kube-api-access-wjvxc\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.975742 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-ovsdbserver-nb\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.975991 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.978837 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-dns-svc\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:09 crc kubenswrapper[4982]: I0123 09:26:09.980101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-config\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.006221 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjvxc\" (UniqueName: \"kubernetes.io/projected/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-kube-api-access-wjvxc\") pod \"dnsmasq-dns-7f6555dd7c-vwtcj\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.036037 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.124470 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.248820 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.249971 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.315134 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.379174 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f6555dd7c-vwtcj"] Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.399758 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cccd8794f-t4xzg"] Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.402988 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.406008 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.414086 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cccd8794f-t4xzg"] Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.491065 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-config\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.491154 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-sb\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.491257 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlh4d\" (UniqueName: \"kubernetes.io/projected/c8cb0abd-1dea-41e4-be24-980586c0ae1c-kube-api-access-hlh4d\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.491371 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-nb\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.491440 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-dns-svc\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.592755 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-config\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.594178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-config\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.595370 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-sb\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.595412 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlh4d\" (UniqueName: \"kubernetes.io/projected/c8cb0abd-1dea-41e4-be24-980586c0ae1c-kube-api-access-hlh4d\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.595730 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-nb\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.595819 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-dns-svc\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.596725 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-nb\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.596817 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-sb\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.597579 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-dns-svc\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.626808 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlh4d\" (UniqueName: 
\"kubernetes.io/projected/c8cb0abd-1dea-41e4-be24-980586c0ae1c-kube-api-access-hlh4d\") pod \"dnsmasq-dns-7cccd8794f-t4xzg\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.729227 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f6555dd7c-vwtcj"] Jan 23 09:26:10 crc kubenswrapper[4982]: I0123 09:26:10.740797 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.145735 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xgc8"] Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.148393 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.157362 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xgc8"] Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.198677 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cccd8794f-t4xzg"] Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.336346 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnz7c\" (UniqueName: \"kubernetes.io/projected/314f8c0e-991b-42dc-802c-1d2c9dd45b81-kube-api-access-gnz7c\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.336399 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-catalog-content\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.337145 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-utilities\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.438090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-utilities\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.438411 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnz7c\" (UniqueName: \"kubernetes.io/projected/314f8c0e-991b-42dc-802c-1d2c9dd45b81-kube-api-access-gnz7c\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.438525 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-catalog-content\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.438646 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-utilities\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.439009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-catalog-content\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.462620 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnz7c\" (UniqueName: \"kubernetes.io/projected/314f8c0e-991b-42dc-802c-1d2c9dd45b81-kube-api-access-gnz7c\") pod \"community-operators-9xgc8\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.543975 4982 generic.go:334] "Generic (PLEG): container finished" podID="e6e53f57-bd9d-40e3-a546-55f7cd6f5196" containerID="d275cff33eac72e7fa11c01cdeab85a50e41f92d269f7e56e576ea3a0acd7c98" exitCode=0 Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.544050 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" event={"ID":"e6e53f57-bd9d-40e3-a546-55f7cd6f5196","Type":"ContainerDied","Data":"d275cff33eac72e7fa11c01cdeab85a50e41f92d269f7e56e576ea3a0acd7c98"} Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.545235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" event={"ID":"e6e53f57-bd9d-40e3-a546-55f7cd6f5196","Type":"ContainerStarted","Data":"3d4efea3a9ba893088b6017e9be2e6d36e62da8ce8e760bab7311563387c76fe"} Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.558050 4982 generic.go:334] "Generic (PLEG): container finished" podID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerID="cc288a56defdeebe3613f7acf1676a70a3e08336d0ad6c4b34bb3fb36dfca46b" exitCode=0 Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.558121 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" event={"ID":"c8cb0abd-1dea-41e4-be24-980586c0ae1c","Type":"ContainerDied","Data":"cc288a56defdeebe3613f7acf1676a70a3e08336d0ad6c4b34bb3fb36dfca46b"} Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.558148 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" event={"ID":"c8cb0abd-1dea-41e4-be24-980586c0ae1c","Type":"ContainerStarted","Data":"e2c0899be908f6de8c680ca78bdb9c22ccec6e69e2726f37569a29490587abaf"} Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.615089 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:11 crc kubenswrapper[4982]: I0123 09:26:11.871990 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.051372 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjvxc\" (UniqueName: \"kubernetes.io/projected/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-kube-api-access-wjvxc\") pod \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.051506 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-config\") pod \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.051659 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-dns-svc\") pod \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.051701 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-ovsdbserver-nb\") pod \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\" (UID: \"e6e53f57-bd9d-40e3-a546-55f7cd6f5196\") " Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.056557 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-kube-api-access-wjvxc" (OuterVolumeSpecName: "kube-api-access-wjvxc") pod "e6e53f57-bd9d-40e3-a546-55f7cd6f5196" (UID: "e6e53f57-bd9d-40e3-a546-55f7cd6f5196"). InnerVolumeSpecName "kube-api-access-wjvxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.073930 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e6e53f57-bd9d-40e3-a546-55f7cd6f5196" (UID: "e6e53f57-bd9d-40e3-a546-55f7cd6f5196"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.079283 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-config" (OuterVolumeSpecName: "config") pod "e6e53f57-bd9d-40e3-a546-55f7cd6f5196" (UID: "e6e53f57-bd9d-40e3-a546-55f7cd6f5196"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.080268 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e6e53f57-bd9d-40e3-a546-55f7cd6f5196" (UID: "e6e53f57-bd9d-40e3-a546-55f7cd6f5196"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.149153 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xgc8"] Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.154739 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.154775 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.154785 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.154795 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjvxc\" (UniqueName: \"kubernetes.io/projected/e6e53f57-bd9d-40e3-a546-55f7cd6f5196-kube-api-access-wjvxc\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.568487 4982 generic.go:334] "Generic (PLEG): container finished" podID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerID="75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b" exitCode=0 Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.568573 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerDied","Data":"75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b"} Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.568608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerStarted","Data":"7c061f7d7c98a244a0bfd695eeb970635234efd7b8eb88bce05fda58b3fd4dfc"} Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.570482 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" event={"ID":"e6e53f57-bd9d-40e3-a546-55f7cd6f5196","Type":"ContainerDied","Data":"3d4efea3a9ba893088b6017e9be2e6d36e62da8ce8e760bab7311563387c76fe"} Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.570506 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f6555dd7c-vwtcj" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.570525 4982 scope.go:117] "RemoveContainer" containerID="d275cff33eac72e7fa11c01cdeab85a50e41f92d269f7e56e576ea3a0acd7c98" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.578812 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" event={"ID":"c8cb0abd-1dea-41e4-be24-980586c0ae1c","Type":"ContainerStarted","Data":"e35c940fbd57ef53feef54ce99b2b8ad779048ed1b110b5bc44fafd571dc030b"} Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.579994 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.627989 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" podStartSLOduration=2.627968285 podStartE2EDuration="2.627968285s" podCreationTimestamp="2026-01-23 09:26:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:12.619502886 +0000 UTC m=+5447.099866272" watchObservedRunningTime="2026-01-23 09:26:12.627968285 +0000 UTC m=+5447.108331671" Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.658821 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f6555dd7c-vwtcj"] Jan 23 09:26:12 crc kubenswrapper[4982]: I0123 09:26:12.666778 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f6555dd7c-vwtcj"] Jan 23 09:26:13 crc kubenswrapper[4982]: I0123 09:26:13.588640 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerStarted","Data":"fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6"} Jan 23 09:26:13 crc kubenswrapper[4982]: I0123 09:26:13.841170 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e53f57-bd9d-40e3-a546-55f7cd6f5196" path="/var/lib/kubelet/pods/e6e53f57-bd9d-40e3-a546-55f7cd6f5196/volumes" Jan 23 09:26:14 crc kubenswrapper[4982]: I0123 09:26:14.236556 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 23 09:26:14 crc kubenswrapper[4982]: I0123 09:26:14.615832 4982 generic.go:334] "Generic (PLEG): container finished" podID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerID="fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6" exitCode=0 Jan 23 09:26:14 crc kubenswrapper[4982]: I0123 09:26:14.615919 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerDied","Data":"fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6"} Jan 23 09:26:15 crc kubenswrapper[4982]: I0123 09:26:15.626852 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerStarted","Data":"be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b"} Jan 23 09:26:15 crc kubenswrapper[4982]: I0123 09:26:15.648684 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9xgc8" podStartSLOduration=2.120431515 podStartE2EDuration="4.648663554s" 
podCreationTimestamp="2026-01-23 09:26:11 +0000 UTC" firstStartedPulling="2026-01-23 09:26:12.570527281 +0000 UTC m=+5447.050890667" lastFinishedPulling="2026-01-23 09:26:15.09875928 +0000 UTC m=+5449.579122706" observedRunningTime="2026-01-23 09:26:15.64409056 +0000 UTC m=+5450.124453936" watchObservedRunningTime="2026-01-23 09:26:15.648663554 +0000 UTC m=+5450.129026940" Jan 23 09:26:16 crc kubenswrapper[4982]: I0123 09:26:16.862982 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 23 09:26:16 crc kubenswrapper[4982]: E0123 09:26:16.863711 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e53f57-bd9d-40e3-a546-55f7cd6f5196" containerName="init" Jan 23 09:26:16 crc kubenswrapper[4982]: I0123 09:26:16.863731 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e53f57-bd9d-40e3-a546-55f7cd6f5196" containerName="init" Jan 23 09:26:16 crc kubenswrapper[4982]: I0123 09:26:16.863999 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e53f57-bd9d-40e3-a546-55f7cd6f5196" containerName="init" Jan 23 09:26:16 crc kubenswrapper[4982]: I0123 09:26:16.864759 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 23 09:26:16 crc kubenswrapper[4982]: I0123 09:26:16.872570 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 23 09:26:16 crc kubenswrapper[4982]: I0123 09:26:16.885141 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.046178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqss2\" (UniqueName: \"kubernetes.io/projected/47974c2a-39ba-4bbc-95ee-c27fda8fdc9c-kube-api-access-lqss2\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.046246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/47974c2a-39ba-4bbc-95ee-c27fda8fdc9c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.046341 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.148870 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/47974c2a-39ba-4bbc-95ee-c27fda8fdc9c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.148979 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 
09:26:17.149208 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqss2\" (UniqueName: \"kubernetes.io/projected/47974c2a-39ba-4bbc-95ee-c27fda8fdc9c-kube-api-access-lqss2\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.152920 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.152972 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ea506de6c489fb435a359a27969a603e9b4c59ba2aeb129969598fc137b0c7d/globalmount\"" pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.158990 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/47974c2a-39ba-4bbc-95ee-c27fda8fdc9c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.167301 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqss2\" (UniqueName: \"kubernetes.io/projected/47974c2a-39ba-4bbc-95ee-c27fda8fdc9c-kube-api-access-lqss2\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.193185 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ef7f660-b5a9-4cf4-9bf0-4dae8601c969\") pod \"ovn-copy-data\" (UID: \"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c\") " pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.486937 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 23 09:26:17 crc kubenswrapper[4982]: I0123 09:26:17.975301 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 23 09:26:18 crc kubenswrapper[4982]: I0123 09:26:18.656786 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c","Type":"ContainerStarted","Data":"4cf5c713524464e3181fc0db64d5a67c4fcee50875c81c6056247419eb9f0da8"} Jan 23 09:26:19 crc kubenswrapper[4982]: I0123 09:26:19.670703 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"47974c2a-39ba-4bbc-95ee-c27fda8fdc9c","Type":"ContainerStarted","Data":"be63376e4514c584447089bb79e5ba58fcfb71ab415d99ce09b995878204f52b"} Jan 23 09:26:19 crc kubenswrapper[4982]: I0123 09:26:19.694139 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=4.149139911 podStartE2EDuration="4.694121281s" podCreationTimestamp="2026-01-23 09:26:15 +0000 UTC" firstStartedPulling="2026-01-23 09:26:17.989549037 +0000 UTC m=+5452.469912423" lastFinishedPulling="2026-01-23 09:26:18.534530397 +0000 UTC m=+5453.014893793" observedRunningTime="2026-01-23 09:26:19.689741202 +0000 UTC m=+5454.170104578" watchObservedRunningTime="2026-01-23 09:26:19.694121281 +0000 UTC m=+5454.174484667" Jan 23 09:26:20 crc kubenswrapper[4982]: I0123 09:26:20.741825 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:20 crc kubenswrapper[4982]: I0123 09:26:20.798932 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-4284v"] Jan 23 09:26:20 crc kubenswrapper[4982]: I0123 09:26:20.799208 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-4284v" podUID="13a750f3-a134-414b-bc76-db62466c591a" containerName="dnsmasq-dns" containerID="cri-o://e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50" gracePeriod=10 Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.349487 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.385120 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-dns-svc\") pod \"13a750f3-a134-414b-bc76-db62466c591a\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.385217 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f57f9\" (UniqueName: \"kubernetes.io/projected/13a750f3-a134-414b-bc76-db62466c591a-kube-api-access-f57f9\") pod \"13a750f3-a134-414b-bc76-db62466c591a\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.385303 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-config\") pod \"13a750f3-a134-414b-bc76-db62466c591a\" (UID: \"13a750f3-a134-414b-bc76-db62466c591a\") " Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.391889 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a750f3-a134-414b-bc76-db62466c591a-kube-api-access-f57f9" (OuterVolumeSpecName: "kube-api-access-f57f9") pod "13a750f3-a134-414b-bc76-db62466c591a" (UID: "13a750f3-a134-414b-bc76-db62466c591a"). InnerVolumeSpecName "kube-api-access-f57f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.433143 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13a750f3-a134-414b-bc76-db62466c591a" (UID: "13a750f3-a134-414b-bc76-db62466c591a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.464289 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-config" (OuterVolumeSpecName: "config") pod "13a750f3-a134-414b-bc76-db62466c591a" (UID: "13a750f3-a134-414b-bc76-db62466c591a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.487421 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.487461 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a750f3-a134-414b-bc76-db62466c591a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.487474 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f57f9\" (UniqueName: \"kubernetes.io/projected/13a750f3-a134-414b-bc76-db62466c591a-kube-api-access-f57f9\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.615864 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.615921 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.663325 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.690410 4982 generic.go:334] "Generic (PLEG): container finished" podID="13a750f3-a134-414b-bc76-db62466c591a" containerID="e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50" exitCode=0 Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.690757 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-4284v" event={"ID":"13a750f3-a134-414b-bc76-db62466c591a","Type":"ContainerDied","Data":"e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50"} Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.690794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-4284v" event={"ID":"13a750f3-a134-414b-bc76-db62466c591a","Type":"ContainerDied","Data":"cef7b8c93ba54f9784740a6d940550a41b8a869e290c4c2224f901189f4f94d8"} Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.690816 4982 scope.go:117] "RemoveContainer" containerID="e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.690828 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-4284v" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.723806 4982 scope.go:117] "RemoveContainer" containerID="968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.732050 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-4284v"] Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.739088 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-4284v"] Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.745243 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.747664 4982 scope.go:117] "RemoveContainer" containerID="e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50" Jan 23 09:26:21 crc kubenswrapper[4982]: E0123 09:26:21.748136 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50\": container with ID starting with e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50 not found: ID does not exist" containerID="e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.748168 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50"} err="failed to get container status \"e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50\": rpc error: code = NotFound desc = could not find container \"e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50\": container with ID starting with e4da3ee8d457b2af311c5bfdb9d1bb113cfc5807984ba94f7cbf79468d38dc50 not found: ID does not exist" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.748193 4982 scope.go:117] "RemoveContainer" containerID="968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e" Jan 23 09:26:21 crc kubenswrapper[4982]: E0123 09:26:21.748773 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e\": container with ID starting with 968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e not found: ID does not exist" containerID="968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.748803 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e"} err="failed to get container status \"968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e\": rpc error: code = NotFound desc = could not find container \"968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e\": container with ID starting with 968f35c1244dbb1cebf13e0ad91006a236d912efb703e424f563a435b650578e not found: ID does not exist" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.837915 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a750f3-a134-414b-bc76-db62466c591a" path="/var/lib/kubelet/pods/13a750f3-a134-414b-bc76-db62466c591a/volumes" Jan 23 09:26:21 crc kubenswrapper[4982]: I0123 09:26:21.899466 
4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xgc8"] Jan 23 09:26:23 crc kubenswrapper[4982]: I0123 09:26:23.706390 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xgc8" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="registry-server" containerID="cri-o://be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b" gracePeriod=2 Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.264305 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.448515 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-utilities\") pod \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.448615 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnz7c\" (UniqueName: \"kubernetes.io/projected/314f8c0e-991b-42dc-802c-1d2c9dd45b81-kube-api-access-gnz7c\") pod \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.448828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-catalog-content\") pod \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\" (UID: \"314f8c0e-991b-42dc-802c-1d2c9dd45b81\") " Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.449689 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-utilities" (OuterVolumeSpecName: "utilities") pod "314f8c0e-991b-42dc-802c-1d2c9dd45b81" (UID: "314f8c0e-991b-42dc-802c-1d2c9dd45b81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.462938 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314f8c0e-991b-42dc-802c-1d2c9dd45b81-kube-api-access-gnz7c" (OuterVolumeSpecName: "kube-api-access-gnz7c") pod "314f8c0e-991b-42dc-802c-1d2c9dd45b81" (UID: "314f8c0e-991b-42dc-802c-1d2c9dd45b81"). InnerVolumeSpecName "kube-api-access-gnz7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.502523 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "314f8c0e-991b-42dc-802c-1d2c9dd45b81" (UID: "314f8c0e-991b-42dc-802c-1d2c9dd45b81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.550414 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.550443 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnz7c\" (UniqueName: \"kubernetes.io/projected/314f8c0e-991b-42dc-802c-1d2c9dd45b81-kube-api-access-gnz7c\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.550452 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/314f8c0e-991b-42dc-802c-1d2c9dd45b81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.718449 4982 generic.go:334] "Generic (PLEG): container finished" podID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerID="be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b" exitCode=0 Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.718511 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerDied","Data":"be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b"} Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.718541 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xgc8" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.718702 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xgc8" event={"ID":"314f8c0e-991b-42dc-802c-1d2c9dd45b81","Type":"ContainerDied","Data":"7c061f7d7c98a244a0bfd695eeb970635234efd7b8eb88bce05fda58b3fd4dfc"} Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.718732 4982 scope.go:117] "RemoveContainer" containerID="be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.752029 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xgc8"] Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.752512 4982 scope.go:117] "RemoveContainer" containerID="fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.762482 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xgc8"] Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.771673 4982 scope.go:117] "RemoveContainer" containerID="75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.814256 4982 scope.go:117] "RemoveContainer" containerID="be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b" Jan 23 09:26:24 crc kubenswrapper[4982]: E0123 09:26:24.817306 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b\": container with ID starting with be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b not found: ID does not exist" containerID="be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.817386 
4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b"} err="failed to get container status \"be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b\": rpc error: code = NotFound desc = could not find container \"be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b\": container with ID starting with be903664aff327d2d3374be2784c9b7ec53d17202cea0b6f2de96971acb5732b not found: ID does not exist" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.817441 4982 scope.go:117] "RemoveContainer" containerID="fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6" Jan 23 09:26:24 crc kubenswrapper[4982]: E0123 09:26:24.818084 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6\": container with ID starting with fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6 not found: ID does not exist" containerID="fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.818129 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6"} err="failed to get container status \"fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6\": rpc error: code = NotFound desc = could not find container \"fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6\": container with ID starting with fb4c04329cbb4903a54b9ff77febe25f3a74ec658f2c7da586b88820051f31e6 not found: ID does not exist" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.818181 4982 scope.go:117] "RemoveContainer" containerID="75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b" Jan 23 09:26:24 crc kubenswrapper[4982]: E0123 09:26:24.818555 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b\": container with ID starting with 75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b not found: ID does not exist" containerID="75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b" Jan 23 09:26:24 crc kubenswrapper[4982]: I0123 09:26:24.818609 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b"} err="failed to get container status \"75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b\": rpc error: code = NotFound desc = could not find container \"75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b\": container with ID starting with 75c6eaee6a352aa1e57218855d19a528db8a223c2d96b752dc9db28b4762f45b not found: ID does not exist" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.039137 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 09:26:25 crc kubenswrapper[4982]: E0123 09:26:25.039910 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="extract-content" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.039932 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="extract-content" Jan 23 
09:26:25 crc kubenswrapper[4982]: E0123 09:26:25.039965 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a750f3-a134-414b-bc76-db62466c591a" containerName="dnsmasq-dns" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.039974 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a750f3-a134-414b-bc76-db62466c591a" containerName="dnsmasq-dns" Jan 23 09:26:25 crc kubenswrapper[4982]: E0123 09:26:25.039993 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a750f3-a134-414b-bc76-db62466c591a" containerName="init" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.039999 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a750f3-a134-414b-bc76-db62466c591a" containerName="init" Jan 23 09:26:25 crc kubenswrapper[4982]: E0123 09:26:25.040010 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="extract-utilities" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.040016 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="extract-utilities" Jan 23 09:26:25 crc kubenswrapper[4982]: E0123 09:26:25.040036 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="registry-server" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.040043 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="registry-server" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.040249 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a750f3-a134-414b-bc76-db62466c591a" containerName="dnsmasq-dns" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.040267 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" containerName="registry-server" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.041242 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.044037 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4tq2l" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.045073 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.045411 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.049402 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.057311 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163200 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163299 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163338 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tf6\" (UniqueName: \"kubernetes.io/projected/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-kube-api-access-74tf6\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163361 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163465 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-scripts\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163511 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.163600 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-config\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: 
I0123 09:26:25.265739 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-config\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.265852 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.265912 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.265941 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74tf6\" (UniqueName: \"kubernetes.io/projected/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-kube-api-access-74tf6\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.265965 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.265992 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-scripts\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.266009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.266418 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.267141 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-scripts\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.267271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-config\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.271465 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.271498 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.281413 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.288437 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74tf6\" (UniqueName: \"kubernetes.io/projected/0deaa403-a6a8-45d1-8bae-ca5413dc2a82-kube-api-access-74tf6\") pod \"ovn-northd-0\" (UID: \"0deaa403-a6a8-45d1-8bae-ca5413dc2a82\") " pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.372906 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 09:26:25 crc kubenswrapper[4982]: W0123 09:26:25.836293 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0deaa403_a6a8_45d1_8bae_ca5413dc2a82.slice/crio-b6d2950e5ba25cc8d4a26451e61ea2fac098cbea6e5d53af139d88f57cb653f9 WatchSource:0}: Error finding container b6d2950e5ba25cc8d4a26451e61ea2fac098cbea6e5d53af139d88f57cb653f9: Status 404 returned error can't find the container with id b6d2950e5ba25cc8d4a26451e61ea2fac098cbea6e5d53af139d88f57cb653f9 Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.840084 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="314f8c0e-991b-42dc-802c-1d2c9dd45b81" path="/var/lib/kubelet/pods/314f8c0e-991b-42dc-802c-1d2c9dd45b81/volumes" Jan 23 09:26:25 crc kubenswrapper[4982]: I0123 09:26:25.840733 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 09:26:26 crc kubenswrapper[4982]: I0123 09:26:26.738046 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0deaa403-a6a8-45d1-8bae-ca5413dc2a82","Type":"ContainerStarted","Data":"1fe939eb0e1c1c2c6fd2dfca233f9c3ad9c2c84899f3b154715c380a0769d62e"} Jan 23 09:26:26 crc kubenswrapper[4982]: I0123 09:26:26.738547 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 09:26:26 crc kubenswrapper[4982]: I0123 09:26:26.738563 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0deaa403-a6a8-45d1-8bae-ca5413dc2a82","Type":"ContainerStarted","Data":"dcd28078735d5dcd5151119af6a419c8cec27a9304f60b44741214f2dcf266a0"} Jan 23 09:26:26 crc kubenswrapper[4982]: I0123 09:26:26.738575 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0deaa403-a6a8-45d1-8bae-ca5413dc2a82","Type":"ContainerStarted","Data":"b6d2950e5ba25cc8d4a26451e61ea2fac098cbea6e5d53af139d88f57cb653f9"} Jan 23 09:26:26 
crc kubenswrapper[4982]: I0123 09:26:26.769226 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.769203739 podStartE2EDuration="2.769203739s" podCreationTimestamp="2026-01-23 09:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:26.764098711 +0000 UTC m=+5461.244462097" watchObservedRunningTime="2026-01-23 09:26:26.769203739 +0000 UTC m=+5461.249567125" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.393112 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jnrtx"] Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.395132 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.410009 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jnrtx"] Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.496080 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-03a3-account-create-update-9nsdf"] Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.497583 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.500319 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.506183 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-03a3-account-create-update-9nsdf"] Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.565513 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-operator-scripts\") pod \"keystone-db-create-jnrtx\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.565585 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdfj\" (UniqueName: \"kubernetes.io/projected/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-kube-api-access-wfdfj\") pod \"keystone-db-create-jnrtx\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.667436 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmt6q\" (UniqueName: \"kubernetes.io/projected/13511325-c990-4fae-80f3-4784786ca6f1-kube-api-access-wmt6q\") pod \"keystone-03a3-account-create-update-9nsdf\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.667528 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-operator-scripts\") pod \"keystone-db-create-jnrtx\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.667573 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wfdfj\" (UniqueName: \"kubernetes.io/projected/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-kube-api-access-wfdfj\") pod \"keystone-db-create-jnrtx\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.667616 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13511325-c990-4fae-80f3-4784786ca6f1-operator-scripts\") pod \"keystone-03a3-account-create-update-9nsdf\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.668512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-operator-scripts\") pod \"keystone-db-create-jnrtx\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.823480 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13511325-c990-4fae-80f3-4784786ca6f1-operator-scripts\") pod \"keystone-03a3-account-create-update-9nsdf\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.824781 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmt6q\" (UniqueName: \"kubernetes.io/projected/13511325-c990-4fae-80f3-4784786ca6f1-kube-api-access-wmt6q\") pod \"keystone-03a3-account-create-update-9nsdf\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.825086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13511325-c990-4fae-80f3-4784786ca6f1-operator-scripts\") pod \"keystone-03a3-account-create-update-9nsdf\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.849381 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmt6q\" (UniqueName: \"kubernetes.io/projected/13511325-c990-4fae-80f3-4784786ca6f1-kube-api-access-wmt6q\") pod \"keystone-03a3-account-create-update-9nsdf\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:30 crc kubenswrapper[4982]: I0123 09:26:30.854408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfdfj\" (UniqueName: \"kubernetes.io/projected/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-kube-api-access-wfdfj\") pod \"keystone-db-create-jnrtx\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.026726 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.119175 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.482905 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jnrtx"] Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.581355 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-03a3-account-create-update-9nsdf"] Jan 23 09:26:31 crc kubenswrapper[4982]: W0123 09:26:31.591134 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13511325_c990_4fae_80f3_4784786ca6f1.slice/crio-b4ac2a8eae293ae4aade2a50671e4470fdb9b5ea5bad7cee6734011a7ef18952 WatchSource:0}: Error finding container b4ac2a8eae293ae4aade2a50671e4470fdb9b5ea5bad7cee6734011a7ef18952: Status 404 returned error can't find the container with id b4ac2a8eae293ae4aade2a50671e4470fdb9b5ea5bad7cee6734011a7ef18952 Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.843566 4982 generic.go:334] "Generic (PLEG): container finished" podID="6b0dedd0-c75f-423f-9da4-b0deef1e27cf" containerID="867922a928daaf4a68f2b8b4f38630d39af5f3203e4e417a82c2c909023d80b5" exitCode=0 Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.843676 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jnrtx" event={"ID":"6b0dedd0-c75f-423f-9da4-b0deef1e27cf","Type":"ContainerDied","Data":"867922a928daaf4a68f2b8b4f38630d39af5f3203e4e417a82c2c909023d80b5"} Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.843705 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jnrtx" event={"ID":"6b0dedd0-c75f-423f-9da4-b0deef1e27cf","Type":"ContainerStarted","Data":"7f23725245800199786b1199a7ca6cf6c50975f57dbfef3ccf1183a29f28c919"} Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.846060 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-03a3-account-create-update-9nsdf" event={"ID":"13511325-c990-4fae-80f3-4784786ca6f1","Type":"ContainerStarted","Data":"2c768109e125c62aa0ef2f8c52e9255241c126b528466715ba09f7349f0be88c"} Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.846108 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-03a3-account-create-update-9nsdf" event={"ID":"13511325-c990-4fae-80f3-4784786ca6f1","Type":"ContainerStarted","Data":"b4ac2a8eae293ae4aade2a50671e4470fdb9b5ea5bad7cee6734011a7ef18952"} Jan 23 09:26:31 crc kubenswrapper[4982]: I0123 09:26:31.904700 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-03a3-account-create-update-9nsdf" podStartSLOduration=1.904668617 podStartE2EDuration="1.904668617s" podCreationTimestamp="2026-01-23 09:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:31.8748288 +0000 UTC m=+5466.355192196" watchObservedRunningTime="2026-01-23 09:26:31.904668617 +0000 UTC m=+5466.385031993" Jan 23 09:26:32 crc kubenswrapper[4982]: I0123 09:26:32.856487 4982 generic.go:334] "Generic (PLEG): container finished" podID="13511325-c990-4fae-80f3-4784786ca6f1" containerID="2c768109e125c62aa0ef2f8c52e9255241c126b528466715ba09f7349f0be88c" exitCode=0 Jan 23 09:26:32 crc kubenswrapper[4982]: I0123 09:26:32.856597 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-03a3-account-create-update-9nsdf" 
event={"ID":"13511325-c990-4fae-80f3-4784786ca6f1","Type":"ContainerDied","Data":"2c768109e125c62aa0ef2f8c52e9255241c126b528466715ba09f7349f0be88c"} Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.281755 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.407115 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-operator-scripts\") pod \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.407197 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfdfj\" (UniqueName: \"kubernetes.io/projected/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-kube-api-access-wfdfj\") pod \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\" (UID: \"6b0dedd0-c75f-423f-9da4-b0deef1e27cf\") " Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.408323 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b0dedd0-c75f-423f-9da4-b0deef1e27cf" (UID: "6b0dedd0-c75f-423f-9da4-b0deef1e27cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.413966 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-kube-api-access-wfdfj" (OuterVolumeSpecName: "kube-api-access-wfdfj") pod "6b0dedd0-c75f-423f-9da4-b0deef1e27cf" (UID: "6b0dedd0-c75f-423f-9da4-b0deef1e27cf"). InnerVolumeSpecName "kube-api-access-wfdfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.509286 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfdfj\" (UniqueName: \"kubernetes.io/projected/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-kube-api-access-wfdfj\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.509353 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b0dedd0-c75f-423f-9da4-b0deef1e27cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.869696 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jnrtx" Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.870539 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jnrtx" event={"ID":"6b0dedd0-c75f-423f-9da4-b0deef1e27cf","Type":"ContainerDied","Data":"7f23725245800199786b1199a7ca6cf6c50975f57dbfef3ccf1183a29f28c919"} Jan 23 09:26:33 crc kubenswrapper[4982]: I0123 09:26:33.870569 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f23725245800199786b1199a7ca6cf6c50975f57dbfef3ccf1183a29f28c919" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.209576 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.326975 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmt6q\" (UniqueName: \"kubernetes.io/projected/13511325-c990-4fae-80f3-4784786ca6f1-kube-api-access-wmt6q\") pod \"13511325-c990-4fae-80f3-4784786ca6f1\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.327615 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13511325-c990-4fae-80f3-4784786ca6f1-operator-scripts\") pod \"13511325-c990-4fae-80f3-4784786ca6f1\" (UID: \"13511325-c990-4fae-80f3-4784786ca6f1\") " Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.328507 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13511325-c990-4fae-80f3-4784786ca6f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13511325-c990-4fae-80f3-4784786ca6f1" (UID: "13511325-c990-4fae-80f3-4784786ca6f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.331942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13511325-c990-4fae-80f3-4784786ca6f1-kube-api-access-wmt6q" (OuterVolumeSpecName: "kube-api-access-wmt6q") pod "13511325-c990-4fae-80f3-4784786ca6f1" (UID: "13511325-c990-4fae-80f3-4784786ca6f1"). InnerVolumeSpecName "kube-api-access-wmt6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.430131 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmt6q\" (UniqueName: \"kubernetes.io/projected/13511325-c990-4fae-80f3-4784786ca6f1-kube-api-access-wmt6q\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.430195 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13511325-c990-4fae-80f3-4784786ca6f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.880581 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-03a3-account-create-update-9nsdf" event={"ID":"13511325-c990-4fae-80f3-4784786ca6f1","Type":"ContainerDied","Data":"b4ac2a8eae293ae4aade2a50671e4470fdb9b5ea5bad7cee6734011a7ef18952"} Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.880618 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ac2a8eae293ae4aade2a50671e4470fdb9b5ea5bad7cee6734011a7ef18952" Jan 23 09:26:34 crc kubenswrapper[4982]: I0123 09:26:34.880688 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-03a3-account-create-update-9nsdf" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.919262 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hmt2w"] Jan 23 09:26:35 crc kubenswrapper[4982]: E0123 09:26:35.919857 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b0dedd0-c75f-423f-9da4-b0deef1e27cf" containerName="mariadb-database-create" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.919873 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b0dedd0-c75f-423f-9da4-b0deef1e27cf" containerName="mariadb-database-create" Jan 23 09:26:35 crc kubenswrapper[4982]: E0123 09:26:35.919891 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13511325-c990-4fae-80f3-4784786ca6f1" containerName="mariadb-account-create-update" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.919897 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="13511325-c990-4fae-80f3-4784786ca6f1" containerName="mariadb-account-create-update" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.920167 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="13511325-c990-4fae-80f3-4784786ca6f1" containerName="mariadb-account-create-update" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.920253 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b0dedd0-c75f-423f-9da4-b0deef1e27cf" containerName="mariadb-database-create" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.920931 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.925735 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.925758 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.925838 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.925909 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7wfj" Jan 23 09:26:35 crc kubenswrapper[4982]: I0123 09:26:35.928611 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hmt2w"] Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.062352 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-config-data\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.062564 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgw6j\" (UniqueName: \"kubernetes.io/projected/a403bece-dda1-45c7-a804-4a1d7329d2d1-kube-api-access-pgw6j\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.063131 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-combined-ca-bundle\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.165516 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-combined-ca-bundle\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.165640 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-config-data\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.165705 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgw6j\" (UniqueName: \"kubernetes.io/projected/a403bece-dda1-45c7-a804-4a1d7329d2d1-kube-api-access-pgw6j\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.171428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-combined-ca-bundle\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.171428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-config-data\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.183980 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgw6j\" (UniqueName: \"kubernetes.io/projected/a403bece-dda1-45c7-a804-4a1d7329d2d1-kube-api-access-pgw6j\") pod \"keystone-db-sync-hmt2w\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.250059 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.761041 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hmt2w"] Jan 23 09:26:36 crc kubenswrapper[4982]: I0123 09:26:36.899574 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmt2w" event={"ID":"a403bece-dda1-45c7-a804-4a1d7329d2d1","Type":"ContainerStarted","Data":"9ba9f46d3e39a120ccad68ca8f935bd841881ca54d0169280030fae494622113"} Jan 23 09:26:37 crc kubenswrapper[4982]: I0123 09:26:37.910576 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmt2w" event={"ID":"a403bece-dda1-45c7-a804-4a1d7329d2d1","Type":"ContainerStarted","Data":"ed5cc9abea52b21c7b9db6a511346dd72b7bcb177aec74a6527b172ed5456083"} Jan 23 09:26:37 crc kubenswrapper[4982]: I0123 09:26:37.934986 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hmt2w" podStartSLOduration=2.9349652969999998 podStartE2EDuration="2.934965297s" podCreationTimestamp="2026-01-23 09:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:37.924431673 +0000 UTC m=+5472.404795059" watchObservedRunningTime="2026-01-23 09:26:37.934965297 +0000 UTC m=+5472.415328683" Jan 23 09:26:38 crc kubenswrapper[4982]: I0123 09:26:38.926508 4982 generic.go:334] "Generic (PLEG): container finished" podID="a403bece-dda1-45c7-a804-4a1d7329d2d1" containerID="ed5cc9abea52b21c7b9db6a511346dd72b7bcb177aec74a6527b172ed5456083" exitCode=0 Jan 23 09:26:38 crc kubenswrapper[4982]: I0123 09:26:38.926571 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmt2w" event={"ID":"a403bece-dda1-45c7-a804-4a1d7329d2d1","Type":"ContainerDied","Data":"ed5cc9abea52b21c7b9db6a511346dd72b7bcb177aec74a6527b172ed5456083"} Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.238516 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.299371 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-combined-ca-bundle\") pod \"a403bece-dda1-45c7-a804-4a1d7329d2d1\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.299918 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-config-data\") pod \"a403bece-dda1-45c7-a804-4a1d7329d2d1\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.300009 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgw6j\" (UniqueName: \"kubernetes.io/projected/a403bece-dda1-45c7-a804-4a1d7329d2d1-kube-api-access-pgw6j\") pod \"a403bece-dda1-45c7-a804-4a1d7329d2d1\" (UID: \"a403bece-dda1-45c7-a804-4a1d7329d2d1\") " Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.305156 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a403bece-dda1-45c7-a804-4a1d7329d2d1-kube-api-access-pgw6j" (OuterVolumeSpecName: "kube-api-access-pgw6j") pod "a403bece-dda1-45c7-a804-4a1d7329d2d1" (UID: "a403bece-dda1-45c7-a804-4a1d7329d2d1"). InnerVolumeSpecName "kube-api-access-pgw6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.330387 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a403bece-dda1-45c7-a804-4a1d7329d2d1" (UID: "a403bece-dda1-45c7-a804-4a1d7329d2d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.358057 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-config-data" (OuterVolumeSpecName: "config-data") pod "a403bece-dda1-45c7-a804-4a1d7329d2d1" (UID: "a403bece-dda1-45c7-a804-4a1d7329d2d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.402332 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.402368 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgw6j\" (UniqueName: \"kubernetes.io/projected/a403bece-dda1-45c7-a804-4a1d7329d2d1-kube-api-access-pgw6j\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.402378 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a403bece-dda1-45c7-a804-4a1d7329d2d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.429878 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.944367 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hmt2w" event={"ID":"a403bece-dda1-45c7-a804-4a1d7329d2d1","Type":"ContainerDied","Data":"9ba9f46d3e39a120ccad68ca8f935bd841881ca54d0169280030fae494622113"} Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.944414 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba9f46d3e39a120ccad68ca8f935bd841881ca54d0169280030fae494622113" Jan 23 09:26:40 crc kubenswrapper[4982]: I0123 09:26:40.944444 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hmt2w" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.190520 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d6b47f88c-qnnz4"] Jan 23 09:26:41 crc kubenswrapper[4982]: E0123 09:26:41.190990 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a403bece-dda1-45c7-a804-4a1d7329d2d1" containerName="keystone-db-sync" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.191008 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a403bece-dda1-45c7-a804-4a1d7329d2d1" containerName="keystone-db-sync" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.191205 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a403bece-dda1-45c7-a804-4a1d7329d2d1" containerName="keystone-db-sync" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.192204 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.216164 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6b47f88c-qnnz4"] Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.229439 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xhffq"] Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.230774 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.235766 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.236024 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.236133 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7wfj" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.236248 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.236380 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.264085 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xhffq"] Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.322606 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.322988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-config\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.323012 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-dns-svc\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.323038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7pf\" (UniqueName: \"kubernetes.io/projected/f945ebb2-291c-4762-9c1c-7fdde85c6402-kube-api-access-xl7pf\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.323089 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.424950 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425011 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-fernet-keys\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425077 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-combined-ca-bundle\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425126 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-scripts\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425155 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqbdn\" (UniqueName: \"kubernetes.io/projected/d4d50cc6-6266-4424-86ca-5f7d54590053-kube-api-access-nqbdn\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425179 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-credential-keys\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425200 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-config-data\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425219 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-config\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425253 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-dns-svc\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.425291 4982 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xl7pf\" (UniqueName: \"kubernetes.io/projected/f945ebb2-291c-4762-9c1c-7fdde85c6402-kube-api-access-xl7pf\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.426297 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-nb\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.426362 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-sb\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.426993 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-dns-svc\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.427171 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-config\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.454816 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl7pf\" (UniqueName: \"kubernetes.io/projected/f945ebb2-291c-4762-9c1c-7fdde85c6402-kube-api-access-xl7pf\") pod \"dnsmasq-dns-5d6b47f88c-qnnz4\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.515727 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.527786 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqbdn\" (UniqueName: \"kubernetes.io/projected/d4d50cc6-6266-4424-86ca-5f7d54590053-kube-api-access-nqbdn\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.527868 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-credential-keys\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.527901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-config-data\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.528001 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-fernet-keys\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.528086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-combined-ca-bundle\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.528125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-scripts\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.532036 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-scripts\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.532358 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-config-data\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.533433 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-fernet-keys\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.533741 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-combined-ca-bundle\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.536208 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-credential-keys\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.552218 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqbdn\" (UniqueName: \"kubernetes.io/projected/d4d50cc6-6266-4424-86ca-5f7d54590053-kube-api-access-nqbdn\") pod \"keystone-bootstrap-xhffq\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:41 crc kubenswrapper[4982]: I0123 09:26:41.848253 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:42 crc kubenswrapper[4982]: I0123 09:26:42.054827 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d6b47f88c-qnnz4"] Jan 23 09:26:42 crc kubenswrapper[4982]: I0123 09:26:42.353285 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xhffq"] Jan 23 09:26:42 crc kubenswrapper[4982]: W0123 09:26:42.389687 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4d50cc6_6266_4424_86ca_5f7d54590053.slice/crio-5da94bb7b288cc736c92a5860acc136df2678b4bdc9a0dfd1d5ade594284e786 WatchSource:0}: Error finding container 5da94bb7b288cc736c92a5860acc136df2678b4bdc9a0dfd1d5ade594284e786: Status 404 returned error can't find the container with id 5da94bb7b288cc736c92a5860acc136df2678b4bdc9a0dfd1d5ade594284e786 Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.004894 4982 generic.go:334] "Generic (PLEG): container finished" podID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerID="c11ed8f5a8559667841199fea6e7a8bea2716abcc303ee172e520a3d3e188257" exitCode=0 Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.004954 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" event={"ID":"f945ebb2-291c-4762-9c1c-7fdde85c6402","Type":"ContainerDied","Data":"c11ed8f5a8559667841199fea6e7a8bea2716abcc303ee172e520a3d3e188257"} Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.005307 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" event={"ID":"f945ebb2-291c-4762-9c1c-7fdde85c6402","Type":"ContainerStarted","Data":"c272c2e316ea98f700a6a3c3f56a228e73f52bc228749785a622c89ee1e16c43"} Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.006792 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhffq" event={"ID":"d4d50cc6-6266-4424-86ca-5f7d54590053","Type":"ContainerStarted","Data":"8db7576e463e870ae81c645fd1651f5b1e25f9a7e18a45f0121b9c5aa98bec7e"} Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.006825 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhffq" 
event={"ID":"d4d50cc6-6266-4424-86ca-5f7d54590053","Type":"ContainerStarted","Data":"5da94bb7b288cc736c92a5860acc136df2678b4bdc9a0dfd1d5ade594284e786"} Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.054182 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xhffq" podStartSLOduration=2.0541598150000002 podStartE2EDuration="2.054159815s" podCreationTimestamp="2026-01-23 09:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:43.048351228 +0000 UTC m=+5477.528714614" watchObservedRunningTime="2026-01-23 09:26:43.054159815 +0000 UTC m=+5477.534523201" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.095570 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v947s"] Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.099204 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.111680 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v947s"] Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.166558 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-catalog-content\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.167031 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxddd\" (UniqueName: \"kubernetes.io/projected/61c30f1b-6eae-4c58-beca-53851c85cc3f-kube-api-access-vxddd\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.167154 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-utilities\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.268199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-catalog-content\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.269359 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxddd\" (UniqueName: \"kubernetes.io/projected/61c30f1b-6eae-4c58-beca-53851c85cc3f-kube-api-access-vxddd\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.269445 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-utilities\") pod \"redhat-operators-v947s\" 
(UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.269545 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-catalog-content\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.269963 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-utilities\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.291659 4982 scope.go:117] "RemoveContainer" containerID="1ab24f08e7c7fda23744b4887c6073ecc6cac7d472943471fcf10074fb2d0d32" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.291999 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxddd\" (UniqueName: \"kubernetes.io/projected/61c30f1b-6eae-4c58-beca-53851c85cc3f-kube-api-access-vxddd\") pod \"redhat-operators-v947s\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.349951 4982 scope.go:117] "RemoveContainer" containerID="f56d592e0fde14ff17e334a7f17a1dfd89cd99dea7c59e16681944597c8f94a3" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.516725 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:43 crc kubenswrapper[4982]: I0123 09:26:43.993725 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v947s"] Jan 23 09:26:43 crc kubenswrapper[4982]: W0123 09:26:43.997972 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61c30f1b_6eae_4c58_beca_53851c85cc3f.slice/crio-67a3fb76e3ad87c6b86ba031b78b6908f051f4301d130d404866e591d9c2a652 WatchSource:0}: Error finding container 67a3fb76e3ad87c6b86ba031b78b6908f051f4301d130d404866e591d9c2a652: Status 404 returned error can't find the container with id 67a3fb76e3ad87c6b86ba031b78b6908f051f4301d130d404866e591d9c2a652 Jan 23 09:26:44 crc kubenswrapper[4982]: I0123 09:26:44.018482 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" event={"ID":"f945ebb2-291c-4762-9c1c-7fdde85c6402","Type":"ContainerStarted","Data":"4fd3c8a55d054f8fc261924ffd7b6baf0e733d5f9a17e6c822925a264108665a"} Jan 23 09:26:44 crc kubenswrapper[4982]: I0123 09:26:44.018655 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:44 crc kubenswrapper[4982]: I0123 09:26:44.024804 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerStarted","Data":"67a3fb76e3ad87c6b86ba031b78b6908f051f4301d130d404866e591d9c2a652"} Jan 23 09:26:44 crc kubenswrapper[4982]: I0123 09:26:44.043591 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" podStartSLOduration=3.043566435 
podStartE2EDuration="3.043566435s" podCreationTimestamp="2026-01-23 09:26:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:44.042686252 +0000 UTC m=+5478.523049648" watchObservedRunningTime="2026-01-23 09:26:44.043566435 +0000 UTC m=+5478.523929821" Jan 23 09:26:45 crc kubenswrapper[4982]: I0123 09:26:45.035167 4982 generic.go:334] "Generic (PLEG): container finished" podID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerID="3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848" exitCode=0 Jan 23 09:26:45 crc kubenswrapper[4982]: I0123 09:26:45.035256 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerDied","Data":"3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848"} Jan 23 09:26:45 crc kubenswrapper[4982]: I0123 09:26:45.037478 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:26:46 crc kubenswrapper[4982]: I0123 09:26:46.046893 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerStarted","Data":"33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16"} Jan 23 09:26:47 crc kubenswrapper[4982]: I0123 09:26:47.056199 4982 generic.go:334] "Generic (PLEG): container finished" podID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerID="33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16" exitCode=0 Jan 23 09:26:47 crc kubenswrapper[4982]: I0123 09:26:47.056348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerDied","Data":"33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16"} Jan 23 09:26:47 crc kubenswrapper[4982]: I0123 09:26:47.058186 4982 generic.go:334] "Generic (PLEG): container finished" podID="d4d50cc6-6266-4424-86ca-5f7d54590053" containerID="8db7576e463e870ae81c645fd1651f5b1e25f9a7e18a45f0121b9c5aa98bec7e" exitCode=0 Jan 23 09:26:47 crc kubenswrapper[4982]: I0123 09:26:47.058225 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhffq" event={"ID":"d4d50cc6-6266-4424-86ca-5f7d54590053","Type":"ContainerDied","Data":"8db7576e463e870ae81c645fd1651f5b1e25f9a7e18a45f0121b9c5aa98bec7e"} Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.068450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerStarted","Data":"73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584"} Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.087868 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v947s" podStartSLOduration=2.568147471 podStartE2EDuration="5.087853931s" podCreationTimestamp="2026-01-23 09:26:43 +0000 UTC" firstStartedPulling="2026-01-23 09:26:45.037263012 +0000 UTC m=+5479.517626398" lastFinishedPulling="2026-01-23 09:26:47.556969472 +0000 UTC m=+5482.037332858" observedRunningTime="2026-01-23 09:26:48.086144494 +0000 UTC m=+5482.566507880" watchObservedRunningTime="2026-01-23 09:26:48.087853931 +0000 UTC m=+5482.568217317" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 
Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.578309 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-credential-keys\") pod \"d4d50cc6-6266-4424-86ca-5f7d54590053\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.578381 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-combined-ca-bundle\") pod \"d4d50cc6-6266-4424-86ca-5f7d54590053\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.578451 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-fernet-keys\") pod \"d4d50cc6-6266-4424-86ca-5f7d54590053\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.578533 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-config-data\") pod \"d4d50cc6-6266-4424-86ca-5f7d54590053\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.578589 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqbdn\" (UniqueName: \"kubernetes.io/projected/d4d50cc6-6266-4424-86ca-5f7d54590053-kube-api-access-nqbdn\") pod \"d4d50cc6-6266-4424-86ca-5f7d54590053\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.578613 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-scripts\") pod \"d4d50cc6-6266-4424-86ca-5f7d54590053\" (UID: \"d4d50cc6-6266-4424-86ca-5f7d54590053\") " Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.585865 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d4d50cc6-6266-4424-86ca-5f7d54590053" (UID: "d4d50cc6-6266-4424-86ca-5f7d54590053"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.586329 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d50cc6-6266-4424-86ca-5f7d54590053-kube-api-access-nqbdn" (OuterVolumeSpecName: "kube-api-access-nqbdn") pod "d4d50cc6-6266-4424-86ca-5f7d54590053" (UID: "d4d50cc6-6266-4424-86ca-5f7d54590053"). InnerVolumeSpecName "kube-api-access-nqbdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.586377 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d4d50cc6-6266-4424-86ca-5f7d54590053" (UID: "d4d50cc6-6266-4424-86ca-5f7d54590053"). InnerVolumeSpecName "fernet-keys".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.592793 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-scripts" (OuterVolumeSpecName: "scripts") pod "d4d50cc6-6266-4424-86ca-5f7d54590053" (UID: "d4d50cc6-6266-4424-86ca-5f7d54590053"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.607922 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-config-data" (OuterVolumeSpecName: "config-data") pod "d4d50cc6-6266-4424-86ca-5f7d54590053" (UID: "d4d50cc6-6266-4424-86ca-5f7d54590053"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.612991 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4d50cc6-6266-4424-86ca-5f7d54590053" (UID: "d4d50cc6-6266-4424-86ca-5f7d54590053"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.680602 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.680667 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqbdn\" (UniqueName: \"kubernetes.io/projected/d4d50cc6-6266-4424-86ca-5f7d54590053-kube-api-access-nqbdn\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.680680 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.680689 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.681369 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:48 crc kubenswrapper[4982]: I0123 09:26:48.681388 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4d50cc6-6266-4424-86ca-5f7d54590053-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.078076 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xhffq" event={"ID":"d4d50cc6-6266-4424-86ca-5f7d54590053","Type":"ContainerDied","Data":"5da94bb7b288cc736c92a5860acc136df2678b4bdc9a0dfd1d5ade594284e786"} Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.078144 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da94bb7b288cc736c92a5860acc136df2678b4bdc9a0dfd1d5ade594284e786" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.078089 4982 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xhffq" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.159048 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xhffq"] Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.166453 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xhffq"] Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.251930 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-62b62"] Jan 23 09:26:49 crc kubenswrapper[4982]: E0123 09:26:49.252445 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d50cc6-6266-4424-86ca-5f7d54590053" containerName="keystone-bootstrap" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.252472 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d50cc6-6266-4424-86ca-5f7d54590053" containerName="keystone-bootstrap" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.252733 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d50cc6-6266-4424-86ca-5f7d54590053" containerName="keystone-bootstrap" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.253525 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.256359 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.256450 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.256618 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.256794 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7wfj" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.257344 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.266691 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62b62"] Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.292787 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-combined-ca-bundle\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.292864 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-scripts\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.292925 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-config-data\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: 
I0123 09:26:49.292965 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-fernet-keys\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.293091 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8fcr\" (UniqueName: \"kubernetes.io/projected/e4d651a9-601e-4859-abd7-03ba425d148e-kube-api-access-k8fcr\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.293133 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-credential-keys\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.394713 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-combined-ca-bundle\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.394792 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-scripts\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.394856 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-config-data\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.394892 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-fernet-keys\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.394964 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8fcr\" (UniqueName: \"kubernetes.io/projected/e4d651a9-601e-4859-abd7-03ba425d148e-kube-api-access-k8fcr\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.394995 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-credential-keys\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.399904 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-combined-ca-bundle\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.400046 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-config-data\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.401070 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-credential-keys\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.401130 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-fernet-keys\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.405268 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-scripts\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.413585 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8fcr\" (UniqueName: \"kubernetes.io/projected/e4d651a9-601e-4859-abd7-03ba425d148e-kube-api-access-k8fcr\") pod \"keystone-bootstrap-62b62\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.595435 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:49 crc kubenswrapper[4982]: I0123 09:26:49.841946 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4d50cc6-6266-4424-86ca-5f7d54590053" path="/var/lib/kubelet/pods/d4d50cc6-6266-4424-86ca-5f7d54590053/volumes" Jan 23 09:26:50 crc kubenswrapper[4982]: I0123 09:26:50.067357 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-62b62"] Jan 23 09:26:50 crc kubenswrapper[4982]: I0123 09:26:50.085785 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62b62" event={"ID":"e4d651a9-601e-4859-abd7-03ba425d148e","Type":"ContainerStarted","Data":"b1e5babbe384cac2b781105bf787aa45f0aff99510b54e99a33202843e257fd5"} Jan 23 09:26:51 crc kubenswrapper[4982]: I0123 09:26:51.094848 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62b62" event={"ID":"e4d651a9-601e-4859-abd7-03ba425d148e","Type":"ContainerStarted","Data":"2756e5e50b110a99642009925c601d7400ae09ca34eab3625b144d8f7e5627ff"} Jan 23 09:26:51 crc kubenswrapper[4982]: I0123 09:26:51.127191 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-62b62" podStartSLOduration=2.127170844 podStartE2EDuration="2.127170844s" podCreationTimestamp="2026-01-23 09:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:51.112675762 +0000 UTC m=+5485.593039168" watchObservedRunningTime="2026-01-23 09:26:51.127170844 +0000 UTC m=+5485.607534220" Jan 23 09:26:51 crc kubenswrapper[4982]: I0123 09:26:51.517822 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:26:51 crc kubenswrapper[4982]: I0123 09:26:51.571928 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cccd8794f-t4xzg"] Jan 23 09:26:51 crc kubenswrapper[4982]: I0123 09:26:51.572182 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerName="dnsmasq-dns" containerID="cri-o://e35c940fbd57ef53feef54ce99b2b8ad779048ed1b110b5bc44fafd571dc030b" gracePeriod=10 Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.104253 4982 generic.go:334] "Generic (PLEG): container finished" podID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerID="e35c940fbd57ef53feef54ce99b2b8ad779048ed1b110b5bc44fafd571dc030b" exitCode=0 Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.104313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" event={"ID":"c8cb0abd-1dea-41e4-be24-980586c0ae1c","Type":"ContainerDied","Data":"e35c940fbd57ef53feef54ce99b2b8ad779048ed1b110b5bc44fafd571dc030b"} Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.881537 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.983023 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-dns-svc\") pod \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.983140 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-nb\") pod \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.983215 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-config\") pod \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.983247 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlh4d\" (UniqueName: \"kubernetes.io/projected/c8cb0abd-1dea-41e4-be24-980586c0ae1c-kube-api-access-hlh4d\") pod \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " Jan 23 09:26:52 crc kubenswrapper[4982]: I0123 09:26:52.983287 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-sb\") pod \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\" (UID: \"c8cb0abd-1dea-41e4-be24-980586c0ae1c\") " Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.010999 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cb0abd-1dea-41e4-be24-980586c0ae1c-kube-api-access-hlh4d" (OuterVolumeSpecName: "kube-api-access-hlh4d") pod "c8cb0abd-1dea-41e4-be24-980586c0ae1c" (UID: "c8cb0abd-1dea-41e4-be24-980586c0ae1c"). InnerVolumeSpecName "kube-api-access-hlh4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.033697 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8cb0abd-1dea-41e4-be24-980586c0ae1c" (UID: "c8cb0abd-1dea-41e4-be24-980586c0ae1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.036048 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-config" (OuterVolumeSpecName: "config") pod "c8cb0abd-1dea-41e4-be24-980586c0ae1c" (UID: "c8cb0abd-1dea-41e4-be24-980586c0ae1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.036384 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8cb0abd-1dea-41e4-be24-980586c0ae1c" (UID: "c8cb0abd-1dea-41e4-be24-980586c0ae1c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.046247 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8cb0abd-1dea-41e4-be24-980586c0ae1c" (UID: "c8cb0abd-1dea-41e4-be24-980586c0ae1c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.085334 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.085381 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlh4d\" (UniqueName: \"kubernetes.io/projected/c8cb0abd-1dea-41e4-be24-980586c0ae1c-kube-api-access-hlh4d\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.085396 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.085409 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.085421 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8cb0abd-1dea-41e4-be24-980586c0ae1c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.114474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" event={"ID":"c8cb0abd-1dea-41e4-be24-980586c0ae1c","Type":"ContainerDied","Data":"e2c0899be908f6de8c680ca78bdb9c22ccec6e69e2726f37569a29490587abaf"} Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.114523 4982 scope.go:117] "RemoveContainer" containerID="e35c940fbd57ef53feef54ce99b2b8ad779048ed1b110b5bc44fafd571dc030b" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.114640 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cccd8794f-t4xzg" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.150477 4982 scope.go:117] "RemoveContainer" containerID="cc288a56defdeebe3613f7acf1676a70a3e08336d0ad6c4b34bb3fb36dfca46b" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.164352 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cccd8794f-t4xzg"] Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.179420 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cccd8794f-t4xzg"] Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.516908 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.517313 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.558667 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:53 crc kubenswrapper[4982]: I0123 09:26:53.837708 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" path="/var/lib/kubelet/pods/c8cb0abd-1dea-41e4-be24-980586c0ae1c/volumes" Jan 23 09:26:54 crc kubenswrapper[4982]: I0123 09:26:54.126995 4982 generic.go:334] "Generic (PLEG): container finished" podID="e4d651a9-601e-4859-abd7-03ba425d148e" containerID="2756e5e50b110a99642009925c601d7400ae09ca34eab3625b144d8f7e5627ff" exitCode=0 Jan 23 09:26:54 crc kubenswrapper[4982]: I0123 09:26:54.127108 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62b62" event={"ID":"e4d651a9-601e-4859-abd7-03ba425d148e","Type":"ContainerDied","Data":"2756e5e50b110a99642009925c601d7400ae09ca34eab3625b144d8f7e5627ff"} Jan 23 09:26:54 crc kubenswrapper[4982]: I0123 09:26:54.176497 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:54 crc kubenswrapper[4982]: I0123 09:26:54.228311 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v947s"] Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.519448 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.632909 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-scripts\") pod \"e4d651a9-601e-4859-abd7-03ba425d148e\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.632977 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-fernet-keys\") pod \"e4d651a9-601e-4859-abd7-03ba425d148e\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.633010 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-config-data\") pod \"e4d651a9-601e-4859-abd7-03ba425d148e\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.633066 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8fcr\" (UniqueName: \"kubernetes.io/projected/e4d651a9-601e-4859-abd7-03ba425d148e-kube-api-access-k8fcr\") pod \"e4d651a9-601e-4859-abd7-03ba425d148e\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.633126 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-combined-ca-bundle\") pod \"e4d651a9-601e-4859-abd7-03ba425d148e\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.633565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-credential-keys\") pod \"e4d651a9-601e-4859-abd7-03ba425d148e\" (UID: \"e4d651a9-601e-4859-abd7-03ba425d148e\") " Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.639173 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d651a9-601e-4859-abd7-03ba425d148e-kube-api-access-k8fcr" (OuterVolumeSpecName: "kube-api-access-k8fcr") pod "e4d651a9-601e-4859-abd7-03ba425d148e" (UID: "e4d651a9-601e-4859-abd7-03ba425d148e"). InnerVolumeSpecName "kube-api-access-k8fcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.639792 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e4d651a9-601e-4859-abd7-03ba425d148e" (UID: "e4d651a9-601e-4859-abd7-03ba425d148e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.644395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-scripts" (OuterVolumeSpecName: "scripts") pod "e4d651a9-601e-4859-abd7-03ba425d148e" (UID: "e4d651a9-601e-4859-abd7-03ba425d148e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.647362 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e4d651a9-601e-4859-abd7-03ba425d148e" (UID: "e4d651a9-601e-4859-abd7-03ba425d148e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.664657 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-config-data" (OuterVolumeSpecName: "config-data") pod "e4d651a9-601e-4859-abd7-03ba425d148e" (UID: "e4d651a9-601e-4859-abd7-03ba425d148e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.668277 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4d651a9-601e-4859-abd7-03ba425d148e" (UID: "e4d651a9-601e-4859-abd7-03ba425d148e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.735264 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.735456 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.735515 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.735577 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8fcr\" (UniqueName: \"kubernetes.io/projected/e4d651a9-601e-4859-abd7-03ba425d148e-kube-api-access-k8fcr\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.735636 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:55 crc kubenswrapper[4982]: I0123 09:26:55.735724 4982 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e4d651a9-601e-4859-abd7-03ba425d148e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.148568 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-62b62" event={"ID":"e4d651a9-601e-4859-abd7-03ba425d148e","Type":"ContainerDied","Data":"b1e5babbe384cac2b781105bf787aa45f0aff99510b54e99a33202843e257fd5"} Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.149292 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1e5babbe384cac2b781105bf787aa45f0aff99510b54e99a33202843e257fd5" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.148723 4982 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v947s" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="registry-server" containerID="cri-o://73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584" gracePeriod=2 Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.148701 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-62b62" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.320973 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c44b77bf8-p9kls"] Jan 23 09:26:56 crc kubenswrapper[4982]: E0123 09:26:56.321344 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerName="dnsmasq-dns" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.321360 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerName="dnsmasq-dns" Jan 23 09:26:56 crc kubenswrapper[4982]: E0123 09:26:56.321374 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerName="init" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.321380 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerName="init" Jan 23 09:26:56 crc kubenswrapper[4982]: E0123 09:26:56.321427 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d651a9-601e-4859-abd7-03ba425d148e" containerName="keystone-bootstrap" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.321433 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d651a9-601e-4859-abd7-03ba425d148e" containerName="keystone-bootstrap" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.321614 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d651a9-601e-4859-abd7-03ba425d148e" containerName="keystone-bootstrap" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.321652 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cb0abd-1dea-41e4-be24-980586c0ae1c" containerName="dnsmasq-dns" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.322235 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.324526 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.324659 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.324889 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q7wfj" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.324962 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.327201 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.327219 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.345331 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-fernet-keys\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.345533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-scripts\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.345630 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-credential-keys\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.346712 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-combined-ca-bundle\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.346829 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-internal-tls-certs\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.346925 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-public-tls-certs\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.347013 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc59t\" (UniqueName: \"kubernetes.io/projected/c27bb7f3-c125-4255-8fa7-fcdd0127119d-kube-api-access-vc59t\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.347098 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-config-data\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.392543 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c44b77bf8-p9kls"] Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.448827 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-credential-keys\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.448888 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-combined-ca-bundle\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.448914 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-internal-tls-certs\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.448966 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-public-tls-certs\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.449003 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc59t\" (UniqueName: \"kubernetes.io/projected/c27bb7f3-c125-4255-8fa7-fcdd0127119d-kube-api-access-vc59t\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.449041 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-config-data\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.449063 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-fernet-keys\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 
09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.449103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-scripts\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.455805 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-public-tls-certs\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.455997 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-config-data\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.456189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-fernet-keys\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.456369 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-scripts\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.456451 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-credential-keys\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.456535 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-combined-ca-bundle\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.458793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27bb7f3-c125-4255-8fa7-fcdd0127119d-internal-tls-certs\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.471421 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc59t\" (UniqueName: \"kubernetes.io/projected/c27bb7f3-c125-4255-8fa7-fcdd0127119d-kube-api-access-vc59t\") pod \"keystone-c44b77bf8-p9kls\" (UID: \"c27bb7f3-c125-4255-8fa7-fcdd0127119d\") " pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.645616 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.653572 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.753974 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-catalog-content\") pod \"61c30f1b-6eae-4c58-beca-53851c85cc3f\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.754320 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxddd\" (UniqueName: \"kubernetes.io/projected/61c30f1b-6eae-4c58-beca-53851c85cc3f-kube-api-access-vxddd\") pod \"61c30f1b-6eae-4c58-beca-53851c85cc3f\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.754339 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-utilities\") pod \"61c30f1b-6eae-4c58-beca-53851c85cc3f\" (UID: \"61c30f1b-6eae-4c58-beca-53851c85cc3f\") " Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.755607 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-utilities" (OuterVolumeSpecName: "utilities") pod "61c30f1b-6eae-4c58-beca-53851c85cc3f" (UID: "61c30f1b-6eae-4c58-beca-53851c85cc3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.759238 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c30f1b-6eae-4c58-beca-53851c85cc3f-kube-api-access-vxddd" (OuterVolumeSpecName: "kube-api-access-vxddd") pod "61c30f1b-6eae-4c58-beca-53851c85cc3f" (UID: "61c30f1b-6eae-4c58-beca-53851c85cc3f"). InnerVolumeSpecName "kube-api-access-vxddd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.856541 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxddd\" (UniqueName: \"kubernetes.io/projected/61c30f1b-6eae-4c58-beca-53851c85cc3f-kube-api-access-vxddd\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:56 crc kubenswrapper[4982]: I0123 09:26:56.856575 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.086136 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c44b77bf8-p9kls"] Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.159523 4982 generic.go:334] "Generic (PLEG): container finished" podID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerID="73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584" exitCode=0 Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.159590 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerDied","Data":"73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584"} Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.159592 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v947s" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.159620 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v947s" event={"ID":"61c30f1b-6eae-4c58-beca-53851c85cc3f","Type":"ContainerDied","Data":"67a3fb76e3ad87c6b86ba031b78b6908f051f4301d130d404866e591d9c2a652"} Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.159660 4982 scope.go:117] "RemoveContainer" containerID="73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.161799 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c44b77bf8-p9kls" event={"ID":"c27bb7f3-c125-4255-8fa7-fcdd0127119d","Type":"ContainerStarted","Data":"3a654ca9a5f1f5217b82013640a98ecc524bbfcc2d89104531b3cff763bba9bc"} Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.184509 4982 scope.go:117] "RemoveContainer" containerID="33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.208325 4982 scope.go:117] "RemoveContainer" containerID="3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.239055 4982 scope.go:117] "RemoveContainer" containerID="73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584" Jan 23 09:26:57 crc kubenswrapper[4982]: E0123 09:26:57.239574 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584\": container with ID starting with 73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584 not found: ID does not exist" containerID="73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.239618 4982 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584"} err="failed to get container status \"73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584\": rpc error: code = NotFound desc = could not find container \"73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584\": container with ID starting with 73533cc0e4b6dbdc2556b2bd6d32405061af71814894618ac1c0f2e1710da584 not found: ID does not exist" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.239663 4982 scope.go:117] "RemoveContainer" containerID="33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16" Jan 23 09:26:57 crc kubenswrapper[4982]: E0123 09:26:57.240118 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16\": container with ID starting with 33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16 not found: ID does not exist" containerID="33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.240169 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16"} err="failed to get container status \"33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16\": rpc error: code = NotFound desc = could not find container \"33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16\": container with ID starting with 33e55c63cc7b2169aebb9a29382a277f02ca38a87181b19c8e56f3acc3d75f16 not found: ID does not exist" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.240199 4982 scope.go:117] "RemoveContainer" containerID="3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848" Jan 23 09:26:57 crc kubenswrapper[4982]: E0123 09:26:57.240650 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848\": container with ID starting with 3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848 not found: ID does not exist" containerID="3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848" Jan 23 09:26:57 crc kubenswrapper[4982]: I0123 09:26:57.240684 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848"} err="failed to get container status \"3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848\": rpc error: code = NotFound desc = could not find container \"3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848\": container with ID starting with 3239899752b5300541b5cdab90bef1d190ccd109d984b862d3e891ebb7679848 not found: ID does not exist" Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.120093 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61c30f1b-6eae-4c58-beca-53851c85cc3f" (UID: "61c30f1b-6eae-4c58-beca-53851c85cc3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.173007 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c44b77bf8-p9kls" event={"ID":"c27bb7f3-c125-4255-8fa7-fcdd0127119d","Type":"ContainerStarted","Data":"21b4c4a47c458a89ef9b7121189b92e9c1ee4a3449fd266d2f41b6ee7d1fa018"} Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.173105 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.183003 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61c30f1b-6eae-4c58-beca-53851c85cc3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.194857 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c44b77bf8-p9kls" podStartSLOduration=2.194835552 podStartE2EDuration="2.194835552s" podCreationTimestamp="2026-01-23 09:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:26:58.18699868 +0000 UTC m=+5492.667362086" watchObservedRunningTime="2026-01-23 09:26:58.194835552 +0000 UTC m=+5492.675198938" Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.401230 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v947s"] Jan 23 09:26:58 crc kubenswrapper[4982]: I0123 09:26:58.411260 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v947s"] Jan 23 09:26:59 crc kubenswrapper[4982]: I0123 09:26:59.841427 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" path="/var/lib/kubelet/pods/61c30f1b-6eae-4c58-beca-53851c85cc3f/volumes" Jan 23 09:27:11 crc kubenswrapper[4982]: I0123 09:27:11.436335 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:27:11 crc kubenswrapper[4982]: I0123 09:27:11.436980 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:27:28 crc kubenswrapper[4982]: I0123 09:27:28.197626 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c44b77bf8-p9kls" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.408194 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 09:27:30 crc kubenswrapper[4982]: E0123 09:27:30.408947 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="extract-content" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.408962 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="extract-content" Jan 23 09:27:30 crc kubenswrapper[4982]: E0123 09:27:30.408977 4982 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="extract-utilities" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.408983 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="extract-utilities" Jan 23 09:27:30 crc kubenswrapper[4982]: E0123 09:27:30.408996 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="registry-server" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.409003 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="registry-server" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.409189 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c30f1b-6eae-4c58-beca-53851c85cc3f" containerName="registry-server" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.409847 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.413131 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.413237 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.413274 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8cwf6" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.427106 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.601913 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fkz\" (UniqueName: \"kubernetes.io/projected/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-kube-api-access-57fkz\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.602060 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config-secret\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.602150 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.602673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.704232 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.704340 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fkz\" (UniqueName: \"kubernetes.io/projected/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-kube-api-access-57fkz\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.704410 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config-secret\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.704472 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.705271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.716289 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config-secret\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.716403 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.720506 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fkz\" (UniqueName: \"kubernetes.io/projected/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-kube-api-access-57fkz\") pod \"openstackclient\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " pod="openstack/openstackclient" Jan 23 09:27:30 crc kubenswrapper[4982]: I0123 09:27:30.742284 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:27:31 crc kubenswrapper[4982]: I0123 09:27:31.159325 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:27:31 crc kubenswrapper[4982]: I0123 09:27:31.514177 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1","Type":"ContainerStarted","Data":"ed38a1ca8c1e5669c3259b885b55f496e0dbf5930c821ad04c149000882a974f"} Jan 23 09:27:31 crc kubenswrapper[4982]: I0123 09:27:31.514217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1","Type":"ContainerStarted","Data":"4eb4fa70b733bfba293e13cd86883f6156cbe3f404ef8b5c4b9b2f1915025367"} Jan 23 09:27:31 crc kubenswrapper[4982]: I0123 09:27:31.535506 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.535482899 podStartE2EDuration="1.535482899s" podCreationTimestamp="2026-01-23 09:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:27:31.527246387 +0000 UTC m=+5526.007609783" watchObservedRunningTime="2026-01-23 09:27:31.535482899 +0000 UTC m=+5526.015846305" Jan 23 09:27:41 crc kubenswrapper[4982]: I0123 09:27:41.435466 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:27:41 crc kubenswrapper[4982]: I0123 09:27:41.436092 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.436133 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.436814 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.436862 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.437712 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fe69a70f46ce57d41d7e5c077b325e9c4d29babec4475eb09ec573022ab581a"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 
09:28:11.437778 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://7fe69a70f46ce57d41d7e5c077b325e9c4d29babec4475eb09ec573022ab581a" gracePeriod=600 Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.857759 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="7fe69a70f46ce57d41d7e5c077b325e9c4d29babec4475eb09ec573022ab581a" exitCode=0 Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.857838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"7fe69a70f46ce57d41d7e5c077b325e9c4d29babec4475eb09ec573022ab581a"} Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.858123 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd"} Jan 23 09:28:11 crc kubenswrapper[4982]: I0123 09:28:11.858153 4982 scope.go:117] "RemoveContainer" containerID="74326deff94df1f7414d5193695aa5a85a83c3d25f2195b5365b158bc71b24bf" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.045926 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f0a6-account-create-update-hwtqd"] Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.047683 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.049588 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.055490 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hc5mz"] Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.056705 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.068894 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f0a6-account-create-update-hwtqd"] Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.077208 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hc5mz"] Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.157156 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jxq\" (UniqueName: \"kubernetes.io/projected/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-kube-api-access-45jxq\") pod \"barbican-f0a6-account-create-update-hwtqd\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.157277 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-operator-scripts\") pod \"barbican-f0a6-account-create-update-hwtqd\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.259150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jxq\" (UniqueName: \"kubernetes.io/projected/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-kube-api-access-45jxq\") pod \"barbican-f0a6-account-create-update-hwtqd\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.259429 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-operator-scripts\") pod \"barbican-f0a6-account-create-update-hwtqd\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.259570 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-operator-scripts\") pod \"barbican-db-create-hc5mz\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.260501 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v696l\" (UniqueName: \"kubernetes.io/projected/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-kube-api-access-v696l\") pod \"barbican-db-create-hc5mz\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.262011 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-operator-scripts\") pod \"barbican-f0a6-account-create-update-hwtqd\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.278396 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jxq\" (UniqueName: 
\"kubernetes.io/projected/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-kube-api-access-45jxq\") pod \"barbican-f0a6-account-create-update-hwtqd\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.362579 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-operator-scripts\") pod \"barbican-db-create-hc5mz\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.362668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v696l\" (UniqueName: \"kubernetes.io/projected/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-kube-api-access-v696l\") pod \"barbican-db-create-hc5mz\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.370422 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-operator-scripts\") pod \"barbican-db-create-hc5mz\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.371034 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.397191 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v696l\" (UniqueName: \"kubernetes.io/projected/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-kube-api-access-v696l\") pod \"barbican-db-create-hc5mz\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.694442 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:06 crc kubenswrapper[4982]: I0123 09:29:06.835209 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f0a6-account-create-update-hwtqd"] Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.120920 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hc5mz"] Jan 23 09:29:07 crc kubenswrapper[4982]: W0123 09:29:07.130150 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod746e2f37_b40c_43b9_b1ca_7f51b4c1eff4.slice/crio-4c24249f00497f25aac4d502b3654336df4384f0e216fb8b38dbee41f386fef0 WatchSource:0}: Error finding container 4c24249f00497f25aac4d502b3654336df4384f0e216fb8b38dbee41f386fef0: Status 404 returned error can't find the container with id 4c24249f00497f25aac4d502b3654336df4384f0e216fb8b38dbee41f386fef0 Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.399257 4982 generic.go:334] "Generic (PLEG): container finished" podID="3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" containerID="b7076c268357d5d2739a2ad14bc0c18e79807973d8854181342597e987b8df49" exitCode=0 Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.399333 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f0a6-account-create-update-hwtqd" event={"ID":"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab","Type":"ContainerDied","Data":"b7076c268357d5d2739a2ad14bc0c18e79807973d8854181342597e987b8df49"} Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.399414 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f0a6-account-create-update-hwtqd" event={"ID":"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab","Type":"ContainerStarted","Data":"06f2f8c258cd57080beb6c341d07707b617e9ee00e562efde1ee674695cd9863"} Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.403832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hc5mz" event={"ID":"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4","Type":"ContainerStarted","Data":"bedd16242fbcc4e9f3d6f27c9e0551be756c78119068a6445508406993e56141"} Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.403875 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hc5mz" event={"ID":"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4","Type":"ContainerStarted","Data":"4c24249f00497f25aac4d502b3654336df4384f0e216fb8b38dbee41f386fef0"} Jan 23 09:29:07 crc kubenswrapper[4982]: I0123 09:29:07.433035 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-hc5mz" podStartSLOduration=1.4330058110000001 podStartE2EDuration="1.433005811s" podCreationTimestamp="2026-01-23 09:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:07.430120503 +0000 UTC m=+5621.910483889" watchObservedRunningTime="2026-01-23 09:29:07.433005811 +0000 UTC m=+5621.913369187" Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.413893 4982 generic.go:334] "Generic (PLEG): container finished" podID="746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" containerID="bedd16242fbcc4e9f3d6f27c9e0551be756c78119068a6445508406993e56141" exitCode=0 Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.413998 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hc5mz" 
event={"ID":"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4","Type":"ContainerDied","Data":"bedd16242fbcc4e9f3d6f27c9e0551be756c78119068a6445508406993e56141"} Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.728485 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.758249 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-operator-scripts\") pod \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.758351 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45jxq\" (UniqueName: \"kubernetes.io/projected/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-kube-api-access-45jxq\") pod \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\" (UID: \"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab\") " Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.759291 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" (UID: "3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.767868 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-kube-api-access-45jxq" (OuterVolumeSpecName: "kube-api-access-45jxq") pod "3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" (UID: "3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab"). InnerVolumeSpecName "kube-api-access-45jxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.859701 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45jxq\" (UniqueName: \"kubernetes.io/projected/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-kube-api-access-45jxq\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:08 crc kubenswrapper[4982]: I0123 09:29:08.859735 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.424767 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f0a6-account-create-update-hwtqd" event={"ID":"3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab","Type":"ContainerDied","Data":"06f2f8c258cd57080beb6c341d07707b617e9ee00e562efde1ee674695cd9863"} Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.424836 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f2f8c258cd57080beb6c341d07707b617e9ee00e562efde1ee674695cd9863" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.424795 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f0a6-account-create-update-hwtqd" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.703406 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.772736 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-operator-scripts\") pod \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.772796 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v696l\" (UniqueName: \"kubernetes.io/projected/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-kube-api-access-v696l\") pod \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\" (UID: \"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4\") " Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.773765 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" (UID: "746e2f37-b40c-43b9-b1ca-7f51b4c1eff4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.777519 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-kube-api-access-v696l" (OuterVolumeSpecName: "kube-api-access-v696l") pod "746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" (UID: "746e2f37-b40c-43b9-b1ca-7f51b4c1eff4"). InnerVolumeSpecName "kube-api-access-v696l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.874592 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:09 crc kubenswrapper[4982]: I0123 09:29:09.874652 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v696l\" (UniqueName: \"kubernetes.io/projected/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4-kube-api-access-v696l\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:10 crc kubenswrapper[4982]: I0123 09:29:10.435546 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hc5mz" event={"ID":"746e2f37-b40c-43b9-b1ca-7f51b4c1eff4","Type":"ContainerDied","Data":"4c24249f00497f25aac4d502b3654336df4384f0e216fb8b38dbee41f386fef0"} Jan 23 09:29:10 crc kubenswrapper[4982]: I0123 09:29:10.435941 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c24249f00497f25aac4d502b3654336df4384f0e216fb8b38dbee41f386fef0" Jan 23 09:29:10 crc kubenswrapper[4982]: I0123 09:29:10.435685 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hc5mz" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.213212 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-j6bdp"] Jan 23 09:29:11 crc kubenswrapper[4982]: E0123 09:29:11.213706 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" containerName="mariadb-account-create-update" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.213733 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" containerName="mariadb-account-create-update" Jan 23 09:29:11 crc kubenswrapper[4982]: E0123 09:29:11.213767 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" containerName="mariadb-database-create" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.213777 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" containerName="mariadb-database-create" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.214022 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" containerName="mariadb-database-create" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.214060 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" containerName="mariadb-account-create-update" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.214827 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.217293 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.217479 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q6zk8" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.224852 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j6bdp"] Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.405937 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-combined-ca-bundle\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.406012 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s25gm\" (UniqueName: \"kubernetes.io/projected/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-kube-api-access-s25gm\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.406041 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-db-sync-config-data\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.511486 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-combined-ca-bundle\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.511596 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s25gm\" (UniqueName: \"kubernetes.io/projected/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-kube-api-access-s25gm\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.511668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-db-sync-config-data\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.516164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-db-sync-config-data\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.516850 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-combined-ca-bundle\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.531950 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s25gm\" (UniqueName: \"kubernetes.io/projected/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-kube-api-access-s25gm\") pod \"barbican-db-sync-j6bdp\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.539518 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:11 crc kubenswrapper[4982]: I0123 09:29:11.992081 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-j6bdp"] Jan 23 09:29:12 crc kubenswrapper[4982]: I0123 09:29:12.461567 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j6bdp" event={"ID":"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc","Type":"ContainerStarted","Data":"c8ce8058f450bab3ae06ad600da37fbadb5164c1fd7ae203b6c8a9f12c4dc17e"} Jan 23 09:29:12 crc kubenswrapper[4982]: I0123 09:29:12.461934 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j6bdp" event={"ID":"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc","Type":"ContainerStarted","Data":"6ad77c4d3d755b63e75006bb6d68474d723c7542e0b192e681fdb647bcd5e24e"} Jan 23 09:29:12 crc kubenswrapper[4982]: I0123 09:29:12.479845 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-j6bdp" podStartSLOduration=1.479824207 podStartE2EDuration="1.479824207s" podCreationTimestamp="2026-01-23 09:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:12.478545652 +0000 UTC m=+5626.958909048" watchObservedRunningTime="2026-01-23 09:29:12.479824207 +0000 UTC m=+5626.960187593" Jan 23 09:29:14 crc kubenswrapper[4982]: I0123 09:29:14.479363 4982 generic.go:334] "Generic (PLEG): container finished" podID="26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" containerID="c8ce8058f450bab3ae06ad600da37fbadb5164c1fd7ae203b6c8a9f12c4dc17e" exitCode=0 Jan 23 09:29:14 crc kubenswrapper[4982]: I0123 09:29:14.479440 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j6bdp" event={"ID":"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc","Type":"ContainerDied","Data":"c8ce8058f450bab3ae06ad600da37fbadb5164c1fd7ae203b6c8a9f12c4dc17e"} Jan 23 09:29:15 crc kubenswrapper[4982]: I0123 09:29:15.817084 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:15 crc kubenswrapper[4982]: I0123 09:29:15.991706 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-db-sync-config-data\") pod \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " Jan 23 09:29:15 crc kubenswrapper[4982]: I0123 09:29:15.991804 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s25gm\" (UniqueName: \"kubernetes.io/projected/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-kube-api-access-s25gm\") pod \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " Jan 23 09:29:15 crc kubenswrapper[4982]: I0123 09:29:15.991841 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-combined-ca-bundle\") pod \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\" (UID: \"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc\") " Jan 23 09:29:15 crc kubenswrapper[4982]: I0123 09:29:15.999152 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-kube-api-access-s25gm" (OuterVolumeSpecName: "kube-api-access-s25gm") pod "26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" (UID: "26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc"). InnerVolumeSpecName "kube-api-access-s25gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.000814 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" (UID: "26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.034852 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" (UID: "26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.093672 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.093707 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s25gm\" (UniqueName: \"kubernetes.io/projected/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-kube-api-access-s25gm\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.093717 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.508195 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-j6bdp" event={"ID":"26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc","Type":"ContainerDied","Data":"6ad77c4d3d755b63e75006bb6d68474d723c7542e0b192e681fdb647bcd5e24e"} Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.508247 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad77c4d3d755b63e75006bb6d68474d723c7542e0b192e681fdb647bcd5e24e" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.508451 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-j6bdp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.729133 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-647fc4b846-h6tq7"] Jan 23 09:29:16 crc kubenswrapper[4982]: E0123 09:29:16.729546 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" containerName="barbican-db-sync" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.729563 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" containerName="barbican-db-sync" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.729778 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" containerName="barbican-db-sync" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.730746 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.738031 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.738116 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.739190 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-q6zk8" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.744195 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-586bb97bb5-gvcbp"] Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.746086 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.748330 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.759579 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-586bb97bb5-gvcbp"] Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.787740 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-647fc4b846-h6tq7"] Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.848070 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58444476f9-xzshk"] Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.851266 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.867356 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58444476f9-xzshk"] Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912276 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-config-data-custom\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912330 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-dns-svc\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912462 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-combined-ca-bundle\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912520 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-config\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912568 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-722ms\" (UniqueName: \"kubernetes.io/projected/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-kube-api-access-722ms\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912601 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhsn9\" (UniqueName: \"kubernetes.io/projected/109f3866-c341-4f87-afca-28d7bcc7d820-kube-api-access-bhsn9\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: 
\"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.912939 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-config-data\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.913840 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-sb\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.913909 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-combined-ca-bundle\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.913933 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-config-data-custom\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.914063 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc042064-7f87-4037-b952-4d1140d3e85b-logs\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.914183 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-nb\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.914257 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-config-data\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.914546 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh6mm\" (UniqueName: \"kubernetes.io/projected/dc042064-7f87-4037-b952-4d1140d3e85b-kube-api-access-rh6mm\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.914657 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-logs\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.952444 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58dbc55d4d-pmssc"] Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.954886 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.958020 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 09:29:16 crc kubenswrapper[4982]: I0123 09:29:16.970605 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dbc55d4d-pmssc"] Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.017007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhsn9\" (UniqueName: \"kubernetes.io/projected/109f3866-c341-4f87-afca-28d7bcc7d820-kube-api-access-bhsn9\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.017685 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722ms\" (UniqueName: \"kubernetes.io/projected/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-kube-api-access-722ms\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.017814 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fq4\" (UniqueName: \"kubernetes.io/projected/e8b71f02-48e0-427e-a87d-91447f96086d-kube-api-access-59fq4\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.017933 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-config-data\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018049 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-sb\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-combined-ca-bundle\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018247 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-combined-ca-bundle\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018322 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-config-data-custom\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018388 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b71f02-48e0-427e-a87d-91447f96086d-logs\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018465 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc042064-7f87-4037-b952-4d1140d3e85b-logs\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018562 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-nb\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.018660 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-config-data\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.019194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh6mm\" (UniqueName: \"kubernetes.io/projected/dc042064-7f87-4037-b952-4d1140d3e85b-kube-api-access-rh6mm\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.019300 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-logs\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.019394 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-dns-svc\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.019469 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-config-data-custom\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.019588 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-combined-ca-bundle\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.019986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.020052 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc042064-7f87-4037-b952-4d1140d3e85b-logs\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.020163 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-config\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.020262 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data-custom\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.020824 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-sb\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.020824 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-nb\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.020843 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-config\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.021215 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-logs\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.021371 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-dns-svc\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.022534 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-config-data-custom\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.024975 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-config-data-custom\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.025682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-combined-ca-bundle\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.027389 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-config-data\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.027406 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-combined-ca-bundle\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.027904 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc042064-7f87-4037-b952-4d1140d3e85b-config-data\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.041167 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhsn9\" (UniqueName: \"kubernetes.io/projected/109f3866-c341-4f87-afca-28d7bcc7d820-kube-api-access-bhsn9\") pod \"dnsmasq-dns-58444476f9-xzshk\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.041597 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh6mm\" (UniqueName: \"kubernetes.io/projected/dc042064-7f87-4037-b952-4d1140d3e85b-kube-api-access-rh6mm\") pod \"barbican-keystone-listener-647fc4b846-h6tq7\" (UID: \"dc042064-7f87-4037-b952-4d1140d3e85b\") " pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.043452 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722ms\" (UniqueName: \"kubernetes.io/projected/48a0ed35-6d01-410a-9ed1-ff102bdc0afc-kube-api-access-722ms\") pod \"barbican-worker-586bb97bb5-gvcbp\" (UID: \"48a0ed35-6d01-410a-9ed1-ff102bdc0afc\") " pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.052123 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.074247 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-586bb97bb5-gvcbp" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.122593 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.122671 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data-custom\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.122731 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fq4\" (UniqueName: \"kubernetes.io/projected/e8b71f02-48e0-427e-a87d-91447f96086d-kube-api-access-59fq4\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.122779 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-combined-ca-bundle\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.122810 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b71f02-48e0-427e-a87d-91447f96086d-logs\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.123354 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b71f02-48e0-427e-a87d-91447f96086d-logs\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.127461 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data-custom\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.141919 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.142565 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-combined-ca-bundle\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.144644 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fq4\" (UniqueName: \"kubernetes.io/projected/e8b71f02-48e0-427e-a87d-91447f96086d-kube-api-access-59fq4\") pod \"barbican-api-58dbc55d4d-pmssc\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.176032 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.281137 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.564095 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58444476f9-xzshk"] Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.582389 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-647fc4b846-h6tq7"] Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.598919 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dbc55d4d-pmssc"] Jan 23 09:29:17 crc kubenswrapper[4982]: W0123 09:29:17.602487 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc042064_7f87_4037_b952_4d1140d3e85b.slice/crio-1ed7d09dd6b4c5b0faefcb8d8ecf8c034ba2653ac4f53a29bee50da146e4d2f2 WatchSource:0}: Error finding container 1ed7d09dd6b4c5b0faefcb8d8ecf8c034ba2653ac4f53a29bee50da146e4d2f2: Status 404 returned error can't find the container with id 1ed7d09dd6b4c5b0faefcb8d8ecf8c034ba2653ac4f53a29bee50da146e4d2f2 Jan 23 09:29:17 crc kubenswrapper[4982]: I0123 09:29:17.673985 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-586bb97bb5-gvcbp"] Jan 23 09:29:17 crc kubenswrapper[4982]: W0123 09:29:17.699831 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a0ed35_6d01_410a_9ed1_ff102bdc0afc.slice/crio-459e22ecd78fa46a8026a0dee5eec2dde7f5e185141d72f2b53aa327017328de WatchSource:0}: Error finding container 459e22ecd78fa46a8026a0dee5eec2dde7f5e185141d72f2b53aa327017328de: Status 404 returned error can't find the container with id 
459e22ecd78fa46a8026a0dee5eec2dde7f5e185141d72f2b53aa327017328de Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.555227 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" event={"ID":"dc042064-7f87-4037-b952-4d1140d3e85b","Type":"ContainerStarted","Data":"6d7864153929cdefa03a5ea42717e9228cfe51fe2498496c715a64cb7128b5b9"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.555762 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" event={"ID":"dc042064-7f87-4037-b952-4d1140d3e85b","Type":"ContainerStarted","Data":"928c124e3a350eeb7c0fb8e2e57deff8c0abc67250ff3882f8ff60bc08ca161f"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.555776 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" event={"ID":"dc042064-7f87-4037-b952-4d1140d3e85b","Type":"ContainerStarted","Data":"1ed7d09dd6b4c5b0faefcb8d8ecf8c034ba2653ac4f53a29bee50da146e4d2f2"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.570853 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-586bb97bb5-gvcbp" event={"ID":"48a0ed35-6d01-410a-9ed1-ff102bdc0afc","Type":"ContainerStarted","Data":"070d3084524f5e4c9f8d35c8fbc2da55dbb980841c947a4aab457a9a6578d1cf"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.570896 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-586bb97bb5-gvcbp" event={"ID":"48a0ed35-6d01-410a-9ed1-ff102bdc0afc","Type":"ContainerStarted","Data":"40b22fa16b7df7ddebafd15ce73f4ac798d0c1460783e59172444e4957b8dba1"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.570906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-586bb97bb5-gvcbp" event={"ID":"48a0ed35-6d01-410a-9ed1-ff102bdc0afc","Type":"ContainerStarted","Data":"459e22ecd78fa46a8026a0dee5eec2dde7f5e185141d72f2b53aa327017328de"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.581335 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dbc55d4d-pmssc" event={"ID":"e8b71f02-48e0-427e-a87d-91447f96086d","Type":"ContainerStarted","Data":"4977ea9693542ec472ac61b457d4ba92335e168ff0919d52070665fedd002a94"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.581702 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dbc55d4d-pmssc" event={"ID":"e8b71f02-48e0-427e-a87d-91447f96086d","Type":"ContainerStarted","Data":"49308d4230ee2b318d2823bb240708a514aede1c353b169be2c356dea7143bad"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.581716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dbc55d4d-pmssc" event={"ID":"e8b71f02-48e0-427e-a87d-91447f96086d","Type":"ContainerStarted","Data":"bfd3d5f98e36422b435ae3061933920011ad1878506cb806f194a864c8b3e0eb"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.582719 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.582744 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.587029 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-647fc4b846-h6tq7" podStartSLOduration=2.587004612 
podStartE2EDuration="2.587004612s" podCreationTimestamp="2026-01-23 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:18.580291499 +0000 UTC m=+5633.060654885" watchObservedRunningTime="2026-01-23 09:29:18.587004612 +0000 UTC m=+5633.067367998" Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.591389 4982 generic.go:334] "Generic (PLEG): container finished" podID="109f3866-c341-4f87-afca-28d7bcc7d820" containerID="78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703" exitCode=0 Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.591427 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58444476f9-xzshk" event={"ID":"109f3866-c341-4f87-afca-28d7bcc7d820","Type":"ContainerDied","Data":"78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.591448 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58444476f9-xzshk" event={"ID":"109f3866-c341-4f87-afca-28d7bcc7d820","Type":"ContainerStarted","Data":"30b17c7222b9ee869dfd4d0b89133a587a44ef122b7d3416cda34a9f859a019f"} Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.617592 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-586bb97bb5-gvcbp" podStartSLOduration=2.617573212 podStartE2EDuration="2.617573212s" podCreationTimestamp="2026-01-23 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:18.606887522 +0000 UTC m=+5633.087250908" watchObservedRunningTime="2026-01-23 09:29:18.617573212 +0000 UTC m=+5633.097936598" Jan 23 09:29:18 crc kubenswrapper[4982]: I0123 09:29:18.694022 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58dbc55d4d-pmssc" podStartSLOduration=2.6940001479999998 podStartE2EDuration="2.694000148s" podCreationTimestamp="2026-01-23 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:18.662135502 +0000 UTC m=+5633.142498888" watchObservedRunningTime="2026-01-23 09:29:18.694000148 +0000 UTC m=+5633.174363534" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.387050 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bf64c5f48-8qmjw"] Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.389571 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.392104 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.393497 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.399855 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bf64c5f48-8qmjw"] Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.476927 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-internal-tls-certs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.477015 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5s2\" (UniqueName: \"kubernetes.io/projected/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-kube-api-access-2t5s2\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.477057 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-config-data\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.477161 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-logs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.477311 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-config-data-custom\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.477398 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-combined-ca-bundle\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.477569 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-public-tls-certs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.579810 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-public-tls-certs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.579890 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-internal-tls-certs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.579946 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t5s2\" (UniqueName: \"kubernetes.io/projected/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-kube-api-access-2t5s2\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.579987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-config-data\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.580065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-logs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.580155 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-config-data-custom\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.580192 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-combined-ca-bundle\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.581345 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-logs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.584781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-internal-tls-certs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.584910 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-config-data-custom\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.585187 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-public-tls-certs\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.585707 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-config-data\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.590054 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-combined-ca-bundle\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.605980 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58444476f9-xzshk" event={"ID":"109f3866-c341-4f87-afca-28d7bcc7d820","Type":"ContainerStarted","Data":"049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec"} Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.606574 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t5s2\" (UniqueName: \"kubernetes.io/projected/f12b3a0a-3ce2-43ff-892b-91f50f5268a6-kube-api-access-2t5s2\") pod \"barbican-api-bf64c5f48-8qmjw\" (UID: \"f12b3a0a-3ce2-43ff-892b-91f50f5268a6\") " pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.625068 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58444476f9-xzshk" podStartSLOduration=3.625045796 podStartE2EDuration="3.625045796s" podCreationTimestamp="2026-01-23 09:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:19.623167925 +0000 UTC m=+5634.103531331" watchObservedRunningTime="2026-01-23 09:29:19.625045796 +0000 UTC m=+5634.105409182" Jan 23 09:29:19 crc kubenswrapper[4982]: I0123 09:29:19.710051 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:20 crc kubenswrapper[4982]: I0123 09:29:20.208145 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bf64c5f48-8qmjw"] Jan 23 09:29:20 crc kubenswrapper[4982]: W0123 09:29:20.218777 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf12b3a0a_3ce2_43ff_892b_91f50f5268a6.slice/crio-c2ac5604c449550230bf8e55d081dbce5e0122775ee2f933616553b245322504 WatchSource:0}: Error finding container c2ac5604c449550230bf8e55d081dbce5e0122775ee2f933616553b245322504: Status 404 returned error can't find the container with id c2ac5604c449550230bf8e55d081dbce5e0122775ee2f933616553b245322504 Jan 23 09:29:20 crc kubenswrapper[4982]: I0123 09:29:20.620905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf64c5f48-8qmjw" event={"ID":"f12b3a0a-3ce2-43ff-892b-91f50f5268a6","Type":"ContainerStarted","Data":"6509fd81164dcf12896e35b96719ca269f3f833eb32473abc1bf128a4277c79c"} Jan 23 09:29:20 crc kubenswrapper[4982]: I0123 09:29:20.634243 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf64c5f48-8qmjw" event={"ID":"f12b3a0a-3ce2-43ff-892b-91f50f5268a6","Type":"ContainerStarted","Data":"c2ac5604c449550230bf8e55d081dbce5e0122775ee2f933616553b245322504"} Jan 23 09:29:20 crc kubenswrapper[4982]: I0123 09:29:20.634333 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:21 crc kubenswrapper[4982]: I0123 09:29:21.630258 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bf64c5f48-8qmjw" event={"ID":"f12b3a0a-3ce2-43ff-892b-91f50f5268a6","Type":"ContainerStarted","Data":"5134ff464cc5c0e6e6b546dcb6b1face35cec94dffbc1b256fce3fb2d8422965"} Jan 23 09:29:21 crc kubenswrapper[4982]: I0123 09:29:21.654312 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bf64c5f48-8qmjw" podStartSLOduration=2.654288331 podStartE2EDuration="2.654288331s" podCreationTimestamp="2026-01-23 09:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:21.647095286 +0000 UTC m=+5636.127458672" watchObservedRunningTime="2026-01-23 09:29:21.654288331 +0000 UTC m=+5636.134651717" Jan 23 09:29:22 crc kubenswrapper[4982]: I0123 09:29:22.648536 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:22 crc kubenswrapper[4982]: I0123 09:29:22.648599 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:23 crc kubenswrapper[4982]: I0123 09:29:23.847084 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:25 crc kubenswrapper[4982]: I0123 09:29:25.223323 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:27 crc kubenswrapper[4982]: I0123 09:29:27.178571 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:29:27 crc kubenswrapper[4982]: I0123 09:29:27.259334 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6b47f88c-qnnz4"] Jan 23 09:29:27 crc 
kubenswrapper[4982]: I0123 09:29:27.259686 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerName="dnsmasq-dns" containerID="cri-o://4fd3c8a55d054f8fc261924ffd7b6baf0e733d5f9a17e6c822925a264108665a" gracePeriod=10 Jan 23 09:29:27 crc kubenswrapper[4982]: I0123 09:29:27.707560 4982 generic.go:334] "Generic (PLEG): container finished" podID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerID="4fd3c8a55d054f8fc261924ffd7b6baf0e733d5f9a17e6c822925a264108665a" exitCode=0 Jan 23 09:29:27 crc kubenswrapper[4982]: I0123 09:29:27.707696 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" event={"ID":"f945ebb2-291c-4762-9c1c-7fdde85c6402","Type":"ContainerDied","Data":"4fd3c8a55d054f8fc261924ffd7b6baf0e733d5f9a17e6c822925a264108665a"} Jan 23 09:29:27 crc kubenswrapper[4982]: I0123 09:29:27.874869 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.074031 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-nb\") pod \"f945ebb2-291c-4762-9c1c-7fdde85c6402\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.074100 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-dns-svc\") pod \"f945ebb2-291c-4762-9c1c-7fdde85c6402\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.074127 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl7pf\" (UniqueName: \"kubernetes.io/projected/f945ebb2-291c-4762-9c1c-7fdde85c6402-kube-api-access-xl7pf\") pod \"f945ebb2-291c-4762-9c1c-7fdde85c6402\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.074155 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-sb\") pod \"f945ebb2-291c-4762-9c1c-7fdde85c6402\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.074178 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-config\") pod \"f945ebb2-291c-4762-9c1c-7fdde85c6402\" (UID: \"f945ebb2-291c-4762-9c1c-7fdde85c6402\") " Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.082330 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f945ebb2-291c-4762-9c1c-7fdde85c6402-kube-api-access-xl7pf" (OuterVolumeSpecName: "kube-api-access-xl7pf") pod "f945ebb2-291c-4762-9c1c-7fdde85c6402" (UID: "f945ebb2-291c-4762-9c1c-7fdde85c6402"). InnerVolumeSpecName "kube-api-access-xl7pf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.123214 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f945ebb2-291c-4762-9c1c-7fdde85c6402" (UID: "f945ebb2-291c-4762-9c1c-7fdde85c6402"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.124510 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f945ebb2-291c-4762-9c1c-7fdde85c6402" (UID: "f945ebb2-291c-4762-9c1c-7fdde85c6402"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.126949 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f945ebb2-291c-4762-9c1c-7fdde85c6402" (UID: "f945ebb2-291c-4762-9c1c-7fdde85c6402"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.128445 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-config" (OuterVolumeSpecName: "config") pod "f945ebb2-291c-4762-9c1c-7fdde85c6402" (UID: "f945ebb2-291c-4762-9c1c-7fdde85c6402"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.175811 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.175862 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.175877 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl7pf\" (UniqueName: \"kubernetes.io/projected/f945ebb2-291c-4762-9c1c-7fdde85c6402-kube-api-access-xl7pf\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.175893 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.175965 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f945ebb2-291c-4762-9c1c-7fdde85c6402-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.722337 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" event={"ID":"f945ebb2-291c-4762-9c1c-7fdde85c6402","Type":"ContainerDied","Data":"c272c2e316ea98f700a6a3c3f56a228e73f52bc228749785a622c89ee1e16c43"} Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.722676 4982 scope.go:117] "RemoveContainer" 
containerID="4fd3c8a55d054f8fc261924ffd7b6baf0e733d5f9a17e6c822925a264108665a" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.722577 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d6b47f88c-qnnz4" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.746015 4982 scope.go:117] "RemoveContainer" containerID="c11ed8f5a8559667841199fea6e7a8bea2716abcc303ee172e520a3d3e188257" Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.767480 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d6b47f88c-qnnz4"] Jan 23 09:29:28 crc kubenswrapper[4982]: I0123 09:29:28.787974 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d6b47f88c-qnnz4"] Jan 23 09:29:29 crc kubenswrapper[4982]: I0123 09:29:29.839583 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" path="/var/lib/kubelet/pods/f945ebb2-291c-4762-9c1c-7fdde85c6402/volumes" Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.148122 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.262971 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bf64c5f48-8qmjw" Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.336164 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58dbc55d4d-pmssc"] Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.336386 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58dbc55d4d-pmssc" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api-log" containerID="cri-o://49308d4230ee2b318d2823bb240708a514aede1c353b169be2c356dea7143bad" gracePeriod=30 Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.336839 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58dbc55d4d-pmssc" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api" containerID="cri-o://4977ea9693542ec472ac61b457d4ba92335e168ff0919d52070665fedd002a94" gracePeriod=30 Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.755990 4982 generic.go:334] "Generic (PLEG): container finished" podID="e8b71f02-48e0-427e-a87d-91447f96086d" containerID="49308d4230ee2b318d2823bb240708a514aede1c353b169be2c356dea7143bad" exitCode=143 Jan 23 09:29:31 crc kubenswrapper[4982]: I0123 09:29:31.756071 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dbc55d4d-pmssc" event={"ID":"e8b71f02-48e0-427e-a87d-91447f96086d","Type":"ContainerDied","Data":"49308d4230ee2b318d2823bb240708a514aede1c353b169be2c356dea7143bad"} Jan 23 09:29:34 crc kubenswrapper[4982]: I0123 09:29:34.484862 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dbc55d4d-pmssc" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.53:9311/healthcheck\": read tcp 10.217.0.2:52934->10.217.1.53:9311: read: connection reset by peer" Jan 23 09:29:34 crc kubenswrapper[4982]: I0123 09:29:34.484918 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58dbc55d4d-pmssc" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.1.53:9311/healthcheck\": read tcp 10.217.0.2:52932->10.217.1.53:9311: read: connection reset by peer" Jan 23 09:29:34 crc kubenswrapper[4982]: I0123 09:29:34.783541 4982 generic.go:334] "Generic (PLEG): container finished" podID="e8b71f02-48e0-427e-a87d-91447f96086d" containerID="4977ea9693542ec472ac61b457d4ba92335e168ff0919d52070665fedd002a94" exitCode=0 Jan 23 09:29:34 crc kubenswrapper[4982]: I0123 09:29:34.783594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dbc55d4d-pmssc" event={"ID":"e8b71f02-48e0-427e-a87d-91447f96086d","Type":"ContainerDied","Data":"4977ea9693542ec472ac61b457d4ba92335e168ff0919d52070665fedd002a94"} Jan 23 09:29:34 crc kubenswrapper[4982]: I0123 09:29:34.985207 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.108436 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data\") pod \"e8b71f02-48e0-427e-a87d-91447f96086d\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.108945 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-combined-ca-bundle\") pod \"e8b71f02-48e0-427e-a87d-91447f96086d\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.109096 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b71f02-48e0-427e-a87d-91447f96086d-logs\") pod \"e8b71f02-48e0-427e-a87d-91447f96086d\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.109210 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data-custom\") pod \"e8b71f02-48e0-427e-a87d-91447f96086d\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.109362 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59fq4\" (UniqueName: \"kubernetes.io/projected/e8b71f02-48e0-427e-a87d-91447f96086d-kube-api-access-59fq4\") pod \"e8b71f02-48e0-427e-a87d-91447f96086d\" (UID: \"e8b71f02-48e0-427e-a87d-91447f96086d\") " Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.109810 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b71f02-48e0-427e-a87d-91447f96086d-logs" (OuterVolumeSpecName: "logs") pod "e8b71f02-48e0-427e-a87d-91447f96086d" (UID: "e8b71f02-48e0-427e-a87d-91447f96086d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.111424 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b71f02-48e0-427e-a87d-91447f96086d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.115219 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e8b71f02-48e0-427e-a87d-91447f96086d" (UID: "e8b71f02-48e0-427e-a87d-91447f96086d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.115406 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b71f02-48e0-427e-a87d-91447f96086d-kube-api-access-59fq4" (OuterVolumeSpecName: "kube-api-access-59fq4") pod "e8b71f02-48e0-427e-a87d-91447f96086d" (UID: "e8b71f02-48e0-427e-a87d-91447f96086d"). InnerVolumeSpecName "kube-api-access-59fq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.142091 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8b71f02-48e0-427e-a87d-91447f96086d" (UID: "e8b71f02-48e0-427e-a87d-91447f96086d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.173618 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data" (OuterVolumeSpecName: "config-data") pod "e8b71f02-48e0-427e-a87d-91447f96086d" (UID: "e8b71f02-48e0-427e-a87d-91447f96086d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.212512 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.212552 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.212565 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e8b71f02-48e0-427e-a87d-91447f96086d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.212579 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59fq4\" (UniqueName: \"kubernetes.io/projected/e8b71f02-48e0-427e-a87d-91447f96086d-kube-api-access-59fq4\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.794289 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dbc55d4d-pmssc" event={"ID":"e8b71f02-48e0-427e-a87d-91447f96086d","Type":"ContainerDied","Data":"bfd3d5f98e36422b435ae3061933920011ad1878506cb806f194a864c8b3e0eb"} Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.794349 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dbc55d4d-pmssc" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.794639 4982 scope.go:117] "RemoveContainer" containerID="4977ea9693542ec472ac61b457d4ba92335e168ff0919d52070665fedd002a94" Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.861532 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58dbc55d4d-pmssc"] Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.873149 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58dbc55d4d-pmssc"] Jan 23 09:29:35 crc kubenswrapper[4982]: I0123 09:29:35.874792 4982 scope.go:117] "RemoveContainer" containerID="49308d4230ee2b318d2823bb240708a514aede1c353b169be2c356dea7143bad" Jan 23 09:29:37 crc kubenswrapper[4982]: I0123 09:29:37.841370 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" path="/var/lib/kubelet/pods/e8b71f02-48e0-427e-a87d-91447f96086d/volumes" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.144525 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bh2x9"] Jan 23 09:29:53 crc kubenswrapper[4982]: E0123 09:29:53.145484 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api-log" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145501 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api-log" Jan 23 09:29:53 crc kubenswrapper[4982]: E0123 09:29:53.145539 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145551 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api" Jan 23 09:29:53 crc 
kubenswrapper[4982]: E0123 09:29:53.145566 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerName="init" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145573 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerName="init" Jan 23 09:29:53 crc kubenswrapper[4982]: E0123 09:29:53.145578 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerName="dnsmasq-dns" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145584 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerName="dnsmasq-dns" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145802 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145821 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f945ebb2-291c-4762-9c1c-7fdde85c6402" containerName="dnsmasq-dns" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.145839 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b71f02-48e0-427e-a87d-91447f96086d" containerName="barbican-api-log" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.146568 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.157216 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bh2x9"] Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.251721 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-82a9-account-create-update-qjj6n"] Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.253062 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.260158 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.263834 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-82a9-account-create-update-qjj6n"] Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.280824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfrc\" (UniqueName: \"kubernetes.io/projected/8171e575-adcc-47d2-b4c4-bdea3cdabe65-kube-api-access-dlfrc\") pod \"neutron-db-create-bh2x9\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.280998 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8171e575-adcc-47d2-b4c4-bdea3cdabe65-operator-scripts\") pod \"neutron-db-create-bh2x9\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.382569 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlfrc\" (UniqueName: \"kubernetes.io/projected/8171e575-adcc-47d2-b4c4-bdea3cdabe65-kube-api-access-dlfrc\") pod \"neutron-db-create-bh2x9\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.382641 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msssn\" (UniqueName: \"kubernetes.io/projected/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-kube-api-access-msssn\") pod \"neutron-82a9-account-create-update-qjj6n\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.382744 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-operator-scripts\") pod \"neutron-82a9-account-create-update-qjj6n\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.382815 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8171e575-adcc-47d2-b4c4-bdea3cdabe65-operator-scripts\") pod \"neutron-db-create-bh2x9\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.383644 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8171e575-adcc-47d2-b4c4-bdea3cdabe65-operator-scripts\") pod \"neutron-db-create-bh2x9\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.402510 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlfrc\" (UniqueName: \"kubernetes.io/projected/8171e575-adcc-47d2-b4c4-bdea3cdabe65-kube-api-access-dlfrc\") pod 
\"neutron-db-create-bh2x9\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.463815 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.483994 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msssn\" (UniqueName: \"kubernetes.io/projected/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-kube-api-access-msssn\") pod \"neutron-82a9-account-create-update-qjj6n\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.484087 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-operator-scripts\") pod \"neutron-82a9-account-create-update-qjj6n\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.485020 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-operator-scripts\") pod \"neutron-82a9-account-create-update-qjj6n\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.505512 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msssn\" (UniqueName: \"kubernetes.io/projected/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-kube-api-access-msssn\") pod \"neutron-82a9-account-create-update-qjj6n\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.574522 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.875145 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-82a9-account-create-update-qjj6n"] Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.981017 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bh2x9"] Jan 23 09:29:53 crc kubenswrapper[4982]: W0123 09:29:53.991123 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8171e575_adcc_47d2_b4c4_bdea3cdabe65.slice/crio-bb7c1e23f159ef7cb2b7c796d3bd741db8edbb2e515410ef9a9edc3644570d1e WatchSource:0}: Error finding container bb7c1e23f159ef7cb2b7c796d3bd741db8edbb2e515410ef9a9edc3644570d1e: Status 404 returned error can't find the container with id bb7c1e23f159ef7cb2b7c796d3bd741db8edbb2e515410ef9a9edc3644570d1e Jan 23 09:29:53 crc kubenswrapper[4982]: I0123 09:29:53.992506 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-82a9-account-create-update-qjj6n" event={"ID":"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76","Type":"ContainerStarted","Data":"36ed42e5b0b021ea58a3d05a8176f344a611e91c6c8919f48dfca686331bc5c0"} Jan 23 09:29:55 crc kubenswrapper[4982]: I0123 09:29:55.003084 4982 generic.go:334] "Generic (PLEG): container finished" podID="8171e575-adcc-47d2-b4c4-bdea3cdabe65" containerID="0473bc43df2939734ecf07803266c09009442222e3aa30024717e3403b6baf49" exitCode=0 Jan 23 09:29:55 crc kubenswrapper[4982]: I0123 09:29:55.003238 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bh2x9" event={"ID":"8171e575-adcc-47d2-b4c4-bdea3cdabe65","Type":"ContainerDied","Data":"0473bc43df2939734ecf07803266c09009442222e3aa30024717e3403b6baf49"} Jan 23 09:29:55 crc kubenswrapper[4982]: I0123 09:29:55.003486 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bh2x9" event={"ID":"8171e575-adcc-47d2-b4c4-bdea3cdabe65","Type":"ContainerStarted","Data":"bb7c1e23f159ef7cb2b7c796d3bd741db8edbb2e515410ef9a9edc3644570d1e"} Jan 23 09:29:55 crc kubenswrapper[4982]: I0123 09:29:55.004757 4982 generic.go:334] "Generic (PLEG): container finished" podID="5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" containerID="760de6f158aa466391aa0d4bfcf4f903d21f9c85bc52bb4e4f8bf2a362d4efd6" exitCode=0 Jan 23 09:29:55 crc kubenswrapper[4982]: I0123 09:29:55.004781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-82a9-account-create-update-qjj6n" event={"ID":"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76","Type":"ContainerDied","Data":"760de6f158aa466391aa0d4bfcf4f903d21f9c85bc52bb4e4f8bf2a362d4efd6"} Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.433032 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.440704 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.563584 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-operator-scripts\") pod \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.563710 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msssn\" (UniqueName: \"kubernetes.io/projected/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-kube-api-access-msssn\") pod \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\" (UID: \"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76\") " Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.563865 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlfrc\" (UniqueName: \"kubernetes.io/projected/8171e575-adcc-47d2-b4c4-bdea3cdabe65-kube-api-access-dlfrc\") pod \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.563888 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8171e575-adcc-47d2-b4c4-bdea3cdabe65-operator-scripts\") pod \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\" (UID: \"8171e575-adcc-47d2-b4c4-bdea3cdabe65\") " Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.564206 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" (UID: "5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.564492 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171e575-adcc-47d2-b4c4-bdea3cdabe65-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8171e575-adcc-47d2-b4c4-bdea3cdabe65" (UID: "8171e575-adcc-47d2-b4c4-bdea3cdabe65"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.569592 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-kube-api-access-msssn" (OuterVolumeSpecName: "kube-api-access-msssn") pod "5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" (UID: "5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76"). InnerVolumeSpecName "kube-api-access-msssn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.569924 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8171e575-adcc-47d2-b4c4-bdea3cdabe65-kube-api-access-dlfrc" (OuterVolumeSpecName: "kube-api-access-dlfrc") pod "8171e575-adcc-47d2-b4c4-bdea3cdabe65" (UID: "8171e575-adcc-47d2-b4c4-bdea3cdabe65"). InnerVolumeSpecName "kube-api-access-dlfrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.665940 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlfrc\" (UniqueName: \"kubernetes.io/projected/8171e575-adcc-47d2-b4c4-bdea3cdabe65-kube-api-access-dlfrc\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.665976 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8171e575-adcc-47d2-b4c4-bdea3cdabe65-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.665986 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:56 crc kubenswrapper[4982]: I0123 09:29:56.665995 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msssn\" (UniqueName: \"kubernetes.io/projected/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76-kube-api-access-msssn\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:57 crc kubenswrapper[4982]: I0123 09:29:57.026927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-82a9-account-create-update-qjj6n" event={"ID":"5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76","Type":"ContainerDied","Data":"36ed42e5b0b021ea58a3d05a8176f344a611e91c6c8919f48dfca686331bc5c0"} Jan 23 09:29:57 crc kubenswrapper[4982]: I0123 09:29:57.026972 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ed42e5b0b021ea58a3d05a8176f344a611e91c6c8919f48dfca686331bc5c0" Jan 23 09:29:57 crc kubenswrapper[4982]: I0123 09:29:57.026945 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-82a9-account-create-update-qjj6n" Jan 23 09:29:57 crc kubenswrapper[4982]: I0123 09:29:57.029128 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bh2x9" event={"ID":"8171e575-adcc-47d2-b4c4-bdea3cdabe65","Type":"ContainerDied","Data":"bb7c1e23f159ef7cb2b7c796d3bd741db8edbb2e515410ef9a9edc3644570d1e"} Jan 23 09:29:57 crc kubenswrapper[4982]: I0123 09:29:57.029171 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb7c1e23f159ef7cb2b7c796d3bd741db8edbb2e515410ef9a9edc3644570d1e" Jan 23 09:29:57 crc kubenswrapper[4982]: I0123 09:29:57.029148 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bh2x9" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.526208 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gjzdw"] Jan 23 09:29:58 crc kubenswrapper[4982]: E0123 09:29:58.527052 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" containerName="mariadb-account-create-update" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.527073 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" containerName="mariadb-account-create-update" Jan 23 09:29:58 crc kubenswrapper[4982]: E0123 09:29:58.527098 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8171e575-adcc-47d2-b4c4-bdea3cdabe65" containerName="mariadb-database-create" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.527108 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8171e575-adcc-47d2-b4c4-bdea3cdabe65" containerName="mariadb-database-create" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.527333 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8171e575-adcc-47d2-b4c4-bdea3cdabe65" containerName="mariadb-database-create" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.527362 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" containerName="mariadb-account-create-update" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.528148 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.536665 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.536725 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.536665 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-65zdb" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.537258 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gjzdw"] Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.704153 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-config\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.704237 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5fh8\" (UniqueName: \"kubernetes.io/projected/e739740f-4882-439c-8deb-0b28947ed7d3-kube-api-access-q5fh8\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.704278 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-combined-ca-bundle\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 
09:29:58.805855 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5fh8\" (UniqueName: \"kubernetes.io/projected/e739740f-4882-439c-8deb-0b28947ed7d3-kube-api-access-q5fh8\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.805914 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-combined-ca-bundle\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.806064 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-config\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.812887 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-combined-ca-bundle\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.820435 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-config\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.824165 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5fh8\" (UniqueName: \"kubernetes.io/projected/e739740f-4882-439c-8deb-0b28947ed7d3-kube-api-access-q5fh8\") pod \"neutron-db-sync-gjzdw\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:58 crc kubenswrapper[4982]: I0123 09:29:58.852278 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:29:59 crc kubenswrapper[4982]: I0123 09:29:59.088785 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-v52d8"] Jan 23 09:29:59 crc kubenswrapper[4982]: I0123 09:29:59.116826 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-v52d8"] Jan 23 09:29:59 crc kubenswrapper[4982]: I0123 09:29:59.125190 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gjzdw"] Jan 23 09:29:59 crc kubenswrapper[4982]: I0123 09:29:59.859390 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9243b23-c93c-4495-9657-84cf3e077475" path="/var/lib/kubelet/pods/a9243b23-c93c-4495-9657-84cf3e077475/volumes" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.057979 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gjzdw" event={"ID":"e739740f-4882-439c-8deb-0b28947ed7d3","Type":"ContainerStarted","Data":"9df2b43aaa2efef4ab3060ea5bcadbd4f0dc99e1d54da103d27cd9aed6760821"} Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.058018 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gjzdw" event={"ID":"e739740f-4882-439c-8deb-0b28947ed7d3","Type":"ContainerStarted","Data":"5fe180a16391813021999026f2e6e195bca8d1b89db44119f3ee333f06d4ce52"} Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.083130 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gjzdw" podStartSLOduration=2.083109645 podStartE2EDuration="2.083109645s" podCreationTimestamp="2026-01-23 09:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:00.073992097 +0000 UTC m=+5674.554355483" watchObservedRunningTime="2026-01-23 09:30:00.083109645 +0000 UTC m=+5674.563473041" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.138055 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj"] Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.139959 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.142598 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.142887 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.148442 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj"] Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.242592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8xzg\" (UniqueName: \"kubernetes.io/projected/f2d65805-2205-4a3c-8333-631881106239-kube-api-access-m8xzg\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.242818 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d65805-2205-4a3c-8333-631881106239-config-volume\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.243041 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2d65805-2205-4a3c-8333-631881106239-secret-volume\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.344358 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2d65805-2205-4a3c-8333-631881106239-secret-volume\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.344474 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8xzg\" (UniqueName: \"kubernetes.io/projected/f2d65805-2205-4a3c-8333-631881106239-kube-api-access-m8xzg\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.344523 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d65805-2205-4a3c-8333-631881106239-config-volume\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.345539 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d65805-2205-4a3c-8333-631881106239-config-volume\") pod 
\"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.363324 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2d65805-2205-4a3c-8333-631881106239-secret-volume\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.366182 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8xzg\" (UniqueName: \"kubernetes.io/projected/f2d65805-2205-4a3c-8333-631881106239-kube-api-access-m8xzg\") pod \"collect-profiles-29486010-zb4jj\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.469214 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:00 crc kubenswrapper[4982]: I0123 09:30:00.956134 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj"] Jan 23 09:30:01 crc kubenswrapper[4982]: I0123 09:30:01.067317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" event={"ID":"f2d65805-2205-4a3c-8333-631881106239","Type":"ContainerStarted","Data":"cf6f101732147b952ad124b45504df71eba5eab48563afa10397abc52d303245"} Jan 23 09:30:02 crc kubenswrapper[4982]: I0123 09:30:02.079427 4982 generic.go:334] "Generic (PLEG): container finished" podID="f2d65805-2205-4a3c-8333-631881106239" containerID="412447ef59a5fd8b22edc889f65568166836917f2a330c816a38c55e2a51cca1" exitCode=0 Jan 23 09:30:02 crc kubenswrapper[4982]: I0123 09:30:02.079479 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" event={"ID":"f2d65805-2205-4a3c-8333-631881106239","Type":"ContainerDied","Data":"412447ef59a5fd8b22edc889f65568166836917f2a330c816a38c55e2a51cca1"} Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.363148 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.505810 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d65805-2205-4a3c-8333-631881106239-config-volume\") pod \"f2d65805-2205-4a3c-8333-631881106239\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.506004 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2d65805-2205-4a3c-8333-631881106239-secret-volume\") pod \"f2d65805-2205-4a3c-8333-631881106239\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.506081 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8xzg\" (UniqueName: \"kubernetes.io/projected/f2d65805-2205-4a3c-8333-631881106239-kube-api-access-m8xzg\") pod \"f2d65805-2205-4a3c-8333-631881106239\" (UID: \"f2d65805-2205-4a3c-8333-631881106239\") " Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.507539 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d65805-2205-4a3c-8333-631881106239-config-volume" (OuterVolumeSpecName: "config-volume") pod "f2d65805-2205-4a3c-8333-631881106239" (UID: "f2d65805-2205-4a3c-8333-631881106239"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.522143 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d65805-2205-4a3c-8333-631881106239-kube-api-access-m8xzg" (OuterVolumeSpecName: "kube-api-access-m8xzg") pod "f2d65805-2205-4a3c-8333-631881106239" (UID: "f2d65805-2205-4a3c-8333-631881106239"). InnerVolumeSpecName "kube-api-access-m8xzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.538796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2d65805-2205-4a3c-8333-631881106239-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f2d65805-2205-4a3c-8333-631881106239" (UID: "f2d65805-2205-4a3c-8333-631881106239"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.617808 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2d65805-2205-4a3c-8333-631881106239-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.617861 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8xzg\" (UniqueName: \"kubernetes.io/projected/f2d65805-2205-4a3c-8333-631881106239-kube-api-access-m8xzg\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:03 crc kubenswrapper[4982]: I0123 09:30:03.617871 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d65805-2205-4a3c-8333-631881106239-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:04 crc kubenswrapper[4982]: I0123 09:30:04.099394 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" event={"ID":"f2d65805-2205-4a3c-8333-631881106239","Type":"ContainerDied","Data":"cf6f101732147b952ad124b45504df71eba5eab48563afa10397abc52d303245"} Jan 23 09:30:04 crc kubenswrapper[4982]: I0123 09:30:04.101955 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6f101732147b952ad124b45504df71eba5eab48563afa10397abc52d303245" Jan 23 09:30:04 crc kubenswrapper[4982]: I0123 09:30:04.099674 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-zb4jj" Jan 23 09:30:04 crc kubenswrapper[4982]: I0123 09:30:04.439464 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2"] Jan 23 09:30:04 crc kubenswrapper[4982]: I0123 09:30:04.448527 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-c86w2"] Jan 23 09:30:05 crc kubenswrapper[4982]: I0123 09:30:05.109161 4982 generic.go:334] "Generic (PLEG): container finished" podID="e739740f-4882-439c-8deb-0b28947ed7d3" containerID="9df2b43aaa2efef4ab3060ea5bcadbd4f0dc99e1d54da103d27cd9aed6760821" exitCode=0 Jan 23 09:30:05 crc kubenswrapper[4982]: I0123 09:30:05.109206 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gjzdw" event={"ID":"e739740f-4882-439c-8deb-0b28947ed7d3","Type":"ContainerDied","Data":"9df2b43aaa2efef4ab3060ea5bcadbd4f0dc99e1d54da103d27cd9aed6760821"} Jan 23 09:30:05 crc kubenswrapper[4982]: I0123 09:30:05.844071 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8" path="/var/lib/kubelet/pods/c2d7a57c-63ce-4e7a-96a3-9d440bf0e0f8/volumes" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.420110 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.576288 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5fh8\" (UniqueName: \"kubernetes.io/projected/e739740f-4882-439c-8deb-0b28947ed7d3-kube-api-access-q5fh8\") pod \"e739740f-4882-439c-8deb-0b28947ed7d3\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.576673 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-combined-ca-bundle\") pod \"e739740f-4882-439c-8deb-0b28947ed7d3\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.576730 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-config\") pod \"e739740f-4882-439c-8deb-0b28947ed7d3\" (UID: \"e739740f-4882-439c-8deb-0b28947ed7d3\") " Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.581381 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e739740f-4882-439c-8deb-0b28947ed7d3-kube-api-access-q5fh8" (OuterVolumeSpecName: "kube-api-access-q5fh8") pod "e739740f-4882-439c-8deb-0b28947ed7d3" (UID: "e739740f-4882-439c-8deb-0b28947ed7d3"). InnerVolumeSpecName "kube-api-access-q5fh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.602150 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e739740f-4882-439c-8deb-0b28947ed7d3" (UID: "e739740f-4882-439c-8deb-0b28947ed7d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.624277 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-config" (OuterVolumeSpecName: "config") pod "e739740f-4882-439c-8deb-0b28947ed7d3" (UID: "e739740f-4882-439c-8deb-0b28947ed7d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.679189 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.679537 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e739740f-4882-439c-8deb-0b28947ed7d3-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:06 crc kubenswrapper[4982]: I0123 09:30:06.679607 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5fh8\" (UniqueName: \"kubernetes.io/projected/e739740f-4882-439c-8deb-0b28947ed7d3-kube-api-access-q5fh8\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.127052 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gjzdw" event={"ID":"e739740f-4882-439c-8deb-0b28947ed7d3","Type":"ContainerDied","Data":"5fe180a16391813021999026f2e6e195bca8d1b89db44119f3ee333f06d4ce52"} Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.127090 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe180a16391813021999026f2e6e195bca8d1b89db44119f3ee333f06d4ce52" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.127144 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gjzdw" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.279808 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-844b8b8475-vdxlh"] Jan 23 09:30:07 crc kubenswrapper[4982]: E0123 09:30:07.283880 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d65805-2205-4a3c-8333-631881106239" containerName="collect-profiles" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.284125 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d65805-2205-4a3c-8333-631881106239" containerName="collect-profiles" Jan 23 09:30:07 crc kubenswrapper[4982]: E0123 09:30:07.284149 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e739740f-4882-439c-8deb-0b28947ed7d3" containerName="neutron-db-sync" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.284156 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e739740f-4882-439c-8deb-0b28947ed7d3" containerName="neutron-db-sync" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.284419 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d65805-2205-4a3c-8333-631881106239" containerName="collect-profiles" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.284444 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e739740f-4882-439c-8deb-0b28947ed7d3" containerName="neutron-db-sync" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.285478 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.296498 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844b8b8475-vdxlh"] Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.395472 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-dns-svc\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.395519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnc5\" (UniqueName: \"kubernetes.io/projected/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-kube-api-access-bmnc5\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.395561 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-config\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.395682 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-sb\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.395782 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-nb\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.401351 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d775cc548-46wd8"] Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.403426 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.409779 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.409779 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.410064 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-65zdb" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.410421 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.421906 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d775cc548-46wd8"] Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.499317 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-httpd-config\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.499968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-config\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500040 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-nb\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500112 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-dns-svc\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500157 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnc5\" (UniqueName: \"kubernetes.io/projected/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-kube-api-access-bmnc5\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500197 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-config\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-combined-ca-bundle\") pod \"neutron-6d775cc548-46wd8\" (UID: 
\"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500341 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpr5\" (UniqueName: \"kubernetes.io/projected/69763596-6718-4eed-8a76-b09e1c7cc728-kube-api-access-xkpr5\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500379 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-ovndb-tls-certs\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.500403 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-sb\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.501522 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-sb\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.502180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-nb\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.502868 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-dns-svc\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.503047 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-config\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.519462 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnc5\" (UniqueName: \"kubernetes.io/projected/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-kube-api-access-bmnc5\") pod \"dnsmasq-dns-844b8b8475-vdxlh\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.602316 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-httpd-config\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 
23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.602377 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-config\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.602478 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-combined-ca-bundle\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.602498 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkpr5\" (UniqueName: \"kubernetes.io/projected/69763596-6718-4eed-8a76-b09e1c7cc728-kube-api-access-xkpr5\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.602521 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-ovndb-tls-certs\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.611475 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-config\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.611546 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-combined-ca-bundle\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.612320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-ovndb-tls-certs\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.625118 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.625330 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-httpd-config\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.677780 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkpr5\" (UniqueName: \"kubernetes.io/projected/69763596-6718-4eed-8a76-b09e1c7cc728-kube-api-access-xkpr5\") pod \"neutron-6d775cc548-46wd8\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:07 crc kubenswrapper[4982]: I0123 09:30:07.745125 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:08 crc kubenswrapper[4982]: I0123 09:30:08.403931 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844b8b8475-vdxlh"] Jan 23 09:30:08 crc kubenswrapper[4982]: I0123 09:30:08.577828 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d775cc548-46wd8"] Jan 23 09:30:08 crc kubenswrapper[4982]: W0123 09:30:08.588525 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69763596_6718_4eed_8a76_b09e1c7cc728.slice/crio-7bce7491fbc15ff3ac45e2fcea2a6007d282f92e513af7d6024191632c3fa437 WatchSource:0}: Error finding container 7bce7491fbc15ff3ac45e2fcea2a6007d282f92e513af7d6024191632c3fa437: Status 404 returned error can't find the container with id 7bce7491fbc15ff3ac45e2fcea2a6007d282f92e513af7d6024191632c3fa437 Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.146953 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d775cc548-46wd8" event={"ID":"69763596-6718-4eed-8a76-b09e1c7cc728","Type":"ContainerStarted","Data":"022d778ceca99363022548a7753ae480a40195f73a2e4dcc6d62f93e3bc70105"} Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.147302 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d775cc548-46wd8" event={"ID":"69763596-6718-4eed-8a76-b09e1c7cc728","Type":"ContainerStarted","Data":"6bcc501d6155f163996dcfdc334f5a9ff12026a022a4a99e14b17c0b93e2ac8f"} Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.147315 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d775cc548-46wd8" event={"ID":"69763596-6718-4eed-8a76-b09e1c7cc728","Type":"ContainerStarted","Data":"7bce7491fbc15ff3ac45e2fcea2a6007d282f92e513af7d6024191632c3fa437"} Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.147366 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.149212 4982 generic.go:334] "Generic (PLEG): container finished" podID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerID="a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a" exitCode=0 Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.149244 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" event={"ID":"6ef3e8dc-8e96-48b7-99b7-b54250bf889a","Type":"ContainerDied","Data":"a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a"} Jan 23 
09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.149262 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" event={"ID":"6ef3e8dc-8e96-48b7-99b7-b54250bf889a","Type":"ContainerStarted","Data":"b22a95ac1d8eda731baefde8d760b23d84a18a2c2e9e78ec7da36d7dd93b7245"} Jan 23 09:30:09 crc kubenswrapper[4982]: I0123 09:30:09.181895 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d775cc548-46wd8" podStartSLOduration=2.181870595 podStartE2EDuration="2.181870595s" podCreationTimestamp="2026-01-23 09:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:09.167206096 +0000 UTC m=+5683.647569482" watchObservedRunningTime="2026-01-23 09:30:09.181870595 +0000 UTC m=+5683.662233981" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.159220 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" event={"ID":"6ef3e8dc-8e96-48b7-99b7-b54250bf889a","Type":"ContainerStarted","Data":"ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803"} Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.187454 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" podStartSLOduration=3.187428717 podStartE2EDuration="3.187428717s" podCreationTimestamp="2026-01-23 09:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:10.181983059 +0000 UTC m=+5684.662346445" watchObservedRunningTime="2026-01-23 09:30:10.187428717 +0000 UTC m=+5684.667792113" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.332702 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8455c4f66f-8h5kj"] Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.334724 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.336298 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.338939 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.376242 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8455c4f66f-8h5kj"] Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.493945 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-ovndb-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.494055 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-internal-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.494123 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-httpd-config\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.494150 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdnv\" (UniqueName: \"kubernetes.io/projected/600464ca-45e6-4dd1-9863-50d1f839fd83-kube-api-access-6gdnv\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.494213 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-config\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.494229 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-public-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.494272 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-combined-ca-bundle\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.595929 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-ovndb-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.595996 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-internal-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.596037 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-httpd-config\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.596059 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdnv\" (UniqueName: \"kubernetes.io/projected/600464ca-45e6-4dd1-9863-50d1f839fd83-kube-api-access-6gdnv\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.596079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-config\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.596095 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-public-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.596120 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-combined-ca-bundle\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.603705 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-httpd-config\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.603669 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-combined-ca-bundle\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.615488 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-internal-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: 
\"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.616093 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-ovndb-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.616471 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-public-tls-certs\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.616981 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/600464ca-45e6-4dd1-9863-50d1f839fd83-config\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.633678 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdnv\" (UniqueName: \"kubernetes.io/projected/600464ca-45e6-4dd1-9863-50d1f839fd83-kube-api-access-6gdnv\") pod \"neutron-8455c4f66f-8h5kj\" (UID: \"600464ca-45e6-4dd1-9863-50d1f839fd83\") " pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:10 crc kubenswrapper[4982]: I0123 09:30:10.690161 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:11 crc kubenswrapper[4982]: I0123 09:30:11.168240 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:11 crc kubenswrapper[4982]: I0123 09:30:11.307588 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8455c4f66f-8h5kj"] Jan 23 09:30:11 crc kubenswrapper[4982]: I0123 09:30:11.435477 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:30:11 crc kubenswrapper[4982]: I0123 09:30:11.435523 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:30:12 crc kubenswrapper[4982]: I0123 09:30:12.181219 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8455c4f66f-8h5kj" event={"ID":"600464ca-45e6-4dd1-9863-50d1f839fd83","Type":"ContainerStarted","Data":"d98efc67949e88f467ec974e1bfb1450db46e89f9368b95789c2ad3ec1bedd5d"} Jan 23 09:30:12 crc kubenswrapper[4982]: I0123 09:30:12.181519 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8455c4f66f-8h5kj" event={"ID":"600464ca-45e6-4dd1-9863-50d1f839fd83","Type":"ContainerStarted","Data":"419ec9d8fc6299b9d703d5243b5cb05bd122ee94983e1ed36d7bd0b2196f46ba"} Jan 23 09:30:12 crc kubenswrapper[4982]: I0123 09:30:12.181535 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-8455c4f66f-8h5kj" event={"ID":"600464ca-45e6-4dd1-9863-50d1f839fd83","Type":"ContainerStarted","Data":"94a22528fd3cc970ee4bee5bd54ebcae33fb41fdf52b73da8d11d86ed88a30b4"} Jan 23 09:30:12 crc kubenswrapper[4982]: I0123 09:30:12.181592 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:12 crc kubenswrapper[4982]: I0123 09:30:12.209235 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8455c4f66f-8h5kj" podStartSLOduration=2.209210979 podStartE2EDuration="2.209210979s" podCreationTimestamp="2026-01-23 09:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:30:12.202658781 +0000 UTC m=+5686.683022177" watchObservedRunningTime="2026-01-23 09:30:12.209210979 +0000 UTC m=+5686.689574365" Jan 23 09:30:17 crc kubenswrapper[4982]: I0123 09:30:17.626906 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:30:17 crc kubenswrapper[4982]: I0123 09:30:17.688059 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58444476f9-xzshk"] Jan 23 09:30:17 crc kubenswrapper[4982]: I0123 09:30:17.688302 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58444476f9-xzshk" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" containerName="dnsmasq-dns" containerID="cri-o://049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec" gracePeriod=10 Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.242527 4982 generic.go:334] "Generic (PLEG): container finished" podID="109f3866-c341-4f87-afca-28d7bcc7d820" containerID="049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec" exitCode=0 Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.242534 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.242559 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58444476f9-xzshk" event={"ID":"109f3866-c341-4f87-afca-28d7bcc7d820","Type":"ContainerDied","Data":"049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec"} Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.243313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58444476f9-xzshk" event={"ID":"109f3866-c341-4f87-afca-28d7bcc7d820","Type":"ContainerDied","Data":"30b17c7222b9ee869dfd4d0b89133a587a44ef122b7d3416cda34a9f859a019f"} Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.243360 4982 scope.go:117] "RemoveContainer" containerID="049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.274200 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-config\") pod \"109f3866-c341-4f87-afca-28d7bcc7d820\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.274530 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-sb\") pod \"109f3866-c341-4f87-afca-28d7bcc7d820\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.274785 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhsn9\" (UniqueName: \"kubernetes.io/projected/109f3866-c341-4f87-afca-28d7bcc7d820-kube-api-access-bhsn9\") pod \"109f3866-c341-4f87-afca-28d7bcc7d820\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.274895 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-dns-svc\") pod \"109f3866-c341-4f87-afca-28d7bcc7d820\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.274990 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-nb\") pod \"109f3866-c341-4f87-afca-28d7bcc7d820\" (UID: \"109f3866-c341-4f87-afca-28d7bcc7d820\") " Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.278146 4982 scope.go:117] "RemoveContainer" containerID="78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.308883 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109f3866-c341-4f87-afca-28d7bcc7d820-kube-api-access-bhsn9" (OuterVolumeSpecName: "kube-api-access-bhsn9") pod "109f3866-c341-4f87-afca-28d7bcc7d820" (UID: "109f3866-c341-4f87-afca-28d7bcc7d820"). InnerVolumeSpecName "kube-api-access-bhsn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.379963 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhsn9\" (UniqueName: \"kubernetes.io/projected/109f3866-c341-4f87-afca-28d7bcc7d820-kube-api-access-bhsn9\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.396297 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "109f3866-c341-4f87-afca-28d7bcc7d820" (UID: "109f3866-c341-4f87-afca-28d7bcc7d820"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.420323 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-config" (OuterVolumeSpecName: "config") pod "109f3866-c341-4f87-afca-28d7bcc7d820" (UID: "109f3866-c341-4f87-afca-28d7bcc7d820"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.433820 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "109f3866-c341-4f87-afca-28d7bcc7d820" (UID: "109f3866-c341-4f87-afca-28d7bcc7d820"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.482354 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.482393 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.482404 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.485354 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "109f3866-c341-4f87-afca-28d7bcc7d820" (UID: "109f3866-c341-4f87-afca-28d7bcc7d820"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.506847 4982 scope.go:117] "RemoveContainer" containerID="049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec" Jan 23 09:30:18 crc kubenswrapper[4982]: E0123 09:30:18.514794 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec\": container with ID starting with 049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec not found: ID does not exist" containerID="049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.514846 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec"} err="failed to get container status \"049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec\": rpc error: code = NotFound desc = could not find container \"049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec\": container with ID starting with 049158a51a09a01b51dc2f66f5b339b2fcc625dada7f455ac47532d13a271aec not found: ID does not exist" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.514876 4982 scope.go:117] "RemoveContainer" containerID="78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703" Jan 23 09:30:18 crc kubenswrapper[4982]: E0123 09:30:18.518786 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703\": container with ID starting with 78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703 not found: ID does not exist" containerID="78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.518832 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703"} err="failed to get container status \"78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703\": rpc error: code = NotFound desc = could not find container \"78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703\": container with ID starting with 78223ae6db87749f7ca3d6365359b95b0469b832c376cacac2e75df6fd3f7703 not found: ID does not exist" Jan 23 09:30:18 crc kubenswrapper[4982]: I0123 09:30:18.584475 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/109f3866-c341-4f87-afca-28d7bcc7d820-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:19 crc kubenswrapper[4982]: I0123 09:30:19.253446 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58444476f9-xzshk" Jan 23 09:30:19 crc kubenswrapper[4982]: I0123 09:30:19.289998 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58444476f9-xzshk"] Jan 23 09:30:19 crc kubenswrapper[4982]: I0123 09:30:19.298902 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58444476f9-xzshk"] Jan 23 09:30:19 crc kubenswrapper[4982]: I0123 09:30:19.837435 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" path="/var/lib/kubelet/pods/109f3866-c341-4f87-afca-28d7bcc7d820/volumes" Jan 23 09:30:37 crc kubenswrapper[4982]: I0123 09:30:37.765932 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:40 crc kubenswrapper[4982]: I0123 09:30:40.704618 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8455c4f66f-8h5kj" Jan 23 09:30:40 crc kubenswrapper[4982]: I0123 09:30:40.767469 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d775cc548-46wd8"] Jan 23 09:30:40 crc kubenswrapper[4982]: I0123 09:30:40.767727 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d775cc548-46wd8" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-api" containerID="cri-o://6bcc501d6155f163996dcfdc334f5a9ff12026a022a4a99e14b17c0b93e2ac8f" gracePeriod=30 Jan 23 09:30:40 crc kubenswrapper[4982]: I0123 09:30:40.767812 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d775cc548-46wd8" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-httpd" containerID="cri-o://022d778ceca99363022548a7753ae480a40195f73a2e4dcc6d62f93e3bc70105" gracePeriod=30 Jan 23 09:30:41 crc kubenswrapper[4982]: I0123 09:30:41.436003 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:30:41 crc kubenswrapper[4982]: I0123 09:30:41.436314 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:30:41 crc kubenswrapper[4982]: I0123 09:30:41.467958 4982 generic.go:334] "Generic (PLEG): container finished" podID="69763596-6718-4eed-8a76-b09e1c7cc728" containerID="022d778ceca99363022548a7753ae480a40195f73a2e4dcc6d62f93e3bc70105" exitCode=0 Jan 23 09:30:41 crc kubenswrapper[4982]: I0123 09:30:41.467983 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d775cc548-46wd8" event={"ID":"69763596-6718-4eed-8a76-b09e1c7cc728","Type":"ContainerDied","Data":"022d778ceca99363022548a7753ae480a40195f73a2e4dcc6d62f93e3bc70105"} Jan 23 09:30:43 crc kubenswrapper[4982]: I0123 09:30:43.604767 4982 scope.go:117] "RemoveContainer" containerID="8df22eb4ec68da08674d281fe27918ae98b130fd34db99af66509a520fba457d" Jan 23 09:30:43 crc kubenswrapper[4982]: I0123 09:30:43.627758 4982 scope.go:117] "RemoveContainer" containerID="3c0ef27efac9f10b26b8490db8f315ad9da10ce1280d739cdc63f1454ff82c24" Jan 23 
09:30:45 crc kubenswrapper[4982]: I0123 09:30:45.517548 4982 generic.go:334] "Generic (PLEG): container finished" podID="69763596-6718-4eed-8a76-b09e1c7cc728" containerID="6bcc501d6155f163996dcfdc334f5a9ff12026a022a4a99e14b17c0b93e2ac8f" exitCode=0 Jan 23 09:30:45 crc kubenswrapper[4982]: I0123 09:30:45.517648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d775cc548-46wd8" event={"ID":"69763596-6718-4eed-8a76-b09e1c7cc728","Type":"ContainerDied","Data":"6bcc501d6155f163996dcfdc334f5a9ff12026a022a4a99e14b17c0b93e2ac8f"} Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.009938 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.064896 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-ovndb-tls-certs\") pod \"69763596-6718-4eed-8a76-b09e1c7cc728\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.065070 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-config\") pod \"69763596-6718-4eed-8a76-b09e1c7cc728\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.065135 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-combined-ca-bundle\") pod \"69763596-6718-4eed-8a76-b09e1c7cc728\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.065162 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkpr5\" (UniqueName: \"kubernetes.io/projected/69763596-6718-4eed-8a76-b09e1c7cc728-kube-api-access-xkpr5\") pod \"69763596-6718-4eed-8a76-b09e1c7cc728\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.065240 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-httpd-config\") pod \"69763596-6718-4eed-8a76-b09e1c7cc728\" (UID: \"69763596-6718-4eed-8a76-b09e1c7cc728\") " Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.078830 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69763596-6718-4eed-8a76-b09e1c7cc728-kube-api-access-xkpr5" (OuterVolumeSpecName: "kube-api-access-xkpr5") pod "69763596-6718-4eed-8a76-b09e1c7cc728" (UID: "69763596-6718-4eed-8a76-b09e1c7cc728"). InnerVolumeSpecName "kube-api-access-xkpr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.091892 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "69763596-6718-4eed-8a76-b09e1c7cc728" (UID: "69763596-6718-4eed-8a76-b09e1c7cc728"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.134760 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69763596-6718-4eed-8a76-b09e1c7cc728" (UID: "69763596-6718-4eed-8a76-b09e1c7cc728"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.154798 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-config" (OuterVolumeSpecName: "config") pod "69763596-6718-4eed-8a76-b09e1c7cc728" (UID: "69763596-6718-4eed-8a76-b09e1c7cc728"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.168550 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.168594 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkpr5\" (UniqueName: \"kubernetes.io/projected/69763596-6718-4eed-8a76-b09e1c7cc728-kube-api-access-xkpr5\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.168606 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.168619 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.176322 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "69763596-6718-4eed-8a76-b09e1c7cc728" (UID: "69763596-6718-4eed-8a76-b09e1c7cc728"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.270870 4982 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69763596-6718-4eed-8a76-b09e1c7cc728-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.527013 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d775cc548-46wd8" event={"ID":"69763596-6718-4eed-8a76-b09e1c7cc728","Type":"ContainerDied","Data":"7bce7491fbc15ff3ac45e2fcea2a6007d282f92e513af7d6024191632c3fa437"} Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.527066 4982 scope.go:117] "RemoveContainer" containerID="022d778ceca99363022548a7753ae480a40195f73a2e4dcc6d62f93e3bc70105" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.527066 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d775cc548-46wd8" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.556162 4982 scope.go:117] "RemoveContainer" containerID="6bcc501d6155f163996dcfdc334f5a9ff12026a022a4a99e14b17c0b93e2ac8f" Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.564336 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d775cc548-46wd8"] Jan 23 09:30:46 crc kubenswrapper[4982]: I0123 09:30:46.573527 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d775cc548-46wd8"] Jan 23 09:30:47 crc kubenswrapper[4982]: I0123 09:30:47.838457 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" path="/var/lib/kubelet/pods/69763596-6718-4eed-8a76-b09e1c7cc728/volumes" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.143316 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8hv7d"] Jan 23 09:31:00 crc kubenswrapper[4982]: E0123 09:31:00.145027 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" containerName="dnsmasq-dns" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.145105 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" containerName="dnsmasq-dns" Jan 23 09:31:00 crc kubenswrapper[4982]: E0123 09:31:00.145492 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-httpd" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.145548 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-httpd" Jan 23 09:31:00 crc kubenswrapper[4982]: E0123 09:31:00.145635 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-api" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.145694 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-api" Jan 23 09:31:00 crc kubenswrapper[4982]: E0123 09:31:00.145759 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" containerName="init" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.145815 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" containerName="init" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.146053 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-api" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.146124 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="69763596-6718-4eed-8a76-b09e1c7cc728" containerName="neutron-httpd" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.146189 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="109f3866-c341-4f87-afca-28d7bcc7d820" containerName="dnsmasq-dns" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.146844 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.158857 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.159076 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.159189 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.159304 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.159365 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fgj9c" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.179539 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8hv7d"] Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241132 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-swiftconf\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241204 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-dispersionconf\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241226 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e957df1-1e40-4230-9db0-af1795aebbc2-etc-swift\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241248 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-combined-ca-bundle\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241316 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-scripts\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-ring-data-devices\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.241488 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glwc\" (UniqueName: \"kubernetes.io/projected/6e957df1-1e40-4230-9db0-af1795aebbc2-kube-api-access-5glwc\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.283756 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8696d78c59-6gzgr"] Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.285763 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.296118 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8696d78c59-6gzgr"] Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343070 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e957df1-1e40-4230-9db0-af1795aebbc2-etc-swift\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343115 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-combined-ca-bundle\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343154 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-dns-svc\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343214 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-scripts\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343239 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-ring-data-devices\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343295 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glwc\" (UniqueName: \"kubernetes.io/projected/6e957df1-1e40-4230-9db0-af1795aebbc2-kube-api-access-5glwc\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343316 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-sb\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " 
pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343345 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-config\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343369 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trsbj\" (UniqueName: \"kubernetes.io/projected/c09db274-e506-4f64-8dfa-6358ed30a801-kube-api-access-trsbj\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343392 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-nb\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343410 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-swiftconf\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.343450 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-dispersionconf\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.345180 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e957df1-1e40-4230-9db0-af1795aebbc2-etc-swift\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.345876 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-scripts\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.346540 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-ring-data-devices\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.354951 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-combined-ca-bundle\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc 
kubenswrapper[4982]: I0123 09:31:00.364958 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-dispersionconf\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.369279 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glwc\" (UniqueName: \"kubernetes.io/projected/6e957df1-1e40-4230-9db0-af1795aebbc2-kube-api-access-5glwc\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.389148 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-swiftconf\") pod \"swift-ring-rebalance-8hv7d\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.445456 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-sb\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.445531 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-config\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.445572 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trsbj\" (UniqueName: \"kubernetes.io/projected/c09db274-e506-4f64-8dfa-6358ed30a801-kube-api-access-trsbj\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.445614 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-nb\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.445732 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-dns-svc\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.446806 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-dns-svc\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.448256 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-sb\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.448953 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-config\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.450643 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-nb\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.487645 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.497465 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trsbj\" (UniqueName: \"kubernetes.io/projected/c09db274-e506-4f64-8dfa-6358ed30a801-kube-api-access-trsbj\") pod \"dnsmasq-dns-8696d78c59-6gzgr\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:00 crc kubenswrapper[4982]: I0123 09:31:00.633185 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.014173 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8hv7d"] Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.345552 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8696d78c59-6gzgr"] Jan 23 09:31:01 crc kubenswrapper[4982]: W0123 09:31:01.346338 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc09db274_e506_4f64_8dfa_6358ed30a801.slice/crio-f89248a1824357d13a472830d39195693db36d95271042b4bb25ca2315068dfd WatchSource:0}: Error finding container f89248a1824357d13a472830d39195693db36d95271042b4bb25ca2315068dfd: Status 404 returned error can't find the container with id f89248a1824357d13a472830d39195693db36d95271042b4bb25ca2315068dfd Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.700198 4982 generic.go:334] "Generic (PLEG): container finished" podID="c09db274-e506-4f64-8dfa-6358ed30a801" containerID="73578c22f441c45208a7d981d741a7f1aed8f9ce6cb3ca368a723ff3a657b272" exitCode=0 Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.700294 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" event={"ID":"c09db274-e506-4f64-8dfa-6358ed30a801","Type":"ContainerDied","Data":"73578c22f441c45208a7d981d741a7f1aed8f9ce6cb3ca368a723ff3a657b272"} Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.700340 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" event={"ID":"c09db274-e506-4f64-8dfa-6358ed30a801","Type":"ContainerStarted","Data":"f89248a1824357d13a472830d39195693db36d95271042b4bb25ca2315068dfd"} Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 
09:31:01.701964 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8hv7d" event={"ID":"6e957df1-1e40-4230-9db0-af1795aebbc2","Type":"ContainerStarted","Data":"9a6c486b6ad61c1836ad94041e832ec7d2696f97308f10008fa781d92f0bd332"} Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.701995 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8hv7d" event={"ID":"6e957df1-1e40-4230-9db0-af1795aebbc2","Type":"ContainerStarted","Data":"3649db45af0033d2b8d9adac165f2dba2193038e0be8d6ce5ccb3aae42c079eb"} Jan 23 09:31:01 crc kubenswrapper[4982]: I0123 09:31:01.777959 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8hv7d" podStartSLOduration=1.777919021 podStartE2EDuration="1.777919021s" podCreationTimestamp="2026-01-23 09:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:01.748367468 +0000 UTC m=+5736.228730854" watchObservedRunningTime="2026-01-23 09:31:01.777919021 +0000 UTC m=+5736.258282407" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.721309 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" event={"ID":"c09db274-e506-4f64-8dfa-6358ed30a801","Type":"ContainerStarted","Data":"013e3a8ede4cb46ae8fb5a53714cfa84d8a44224c6844e1eb773f23b4394b640"} Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.721953 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.746356 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" podStartSLOduration=2.746340914 podStartE2EDuration="2.746340914s" podCreationTimestamp="2026-01-23 09:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:02.745485701 +0000 UTC m=+5737.225849107" watchObservedRunningTime="2026-01-23 09:31:02.746340914 +0000 UTC m=+5737.226704300" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.835813 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7dc9469686-sgmtj"] Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.837442 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.840308 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.858155 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7dc9469686-sgmtj"] Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.925778 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-combined-ca-bundle\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.925835 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-log-httpd\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.926070 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrws\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-kube-api-access-sbrws\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.926135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-etc-swift\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.926397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-config-data\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:02 crc kubenswrapper[4982]: I0123 09:31:02.926444 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-run-httpd\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.027983 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-config-data\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.028038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-run-httpd\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " 
pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.028073 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-combined-ca-bundle\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.028101 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-log-httpd\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.028164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrws\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-kube-api-access-sbrws\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.028187 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-etc-swift\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.029294 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-run-httpd\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.029869 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-log-httpd\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.037794 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-combined-ca-bundle\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.038002 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-etc-swift\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.038654 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-config-data\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.047903 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrws\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-kube-api-access-sbrws\") pod \"swift-proxy-7dc9469686-sgmtj\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") " pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.156914 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:03 crc kubenswrapper[4982]: I0123 09:31:03.862059 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7dc9469686-sgmtj"] Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.458153 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-949fbbb54-chlc7"] Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.460291 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.482094 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.482502 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.494981 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-949fbbb54-chlc7"] Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.563723 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-run-httpd\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.563807 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-config-data\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.563836 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-internal-tls-certs\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.563902 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-combined-ca-bundle\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.563959 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-etc-swift\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc 
kubenswrapper[4982]: I0123 09:31:04.563996 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-public-tls-certs\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.564037 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-log-httpd\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.564100 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7hj8\" (UniqueName: \"kubernetes.io/projected/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-kube-api-access-q7hj8\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666110 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7hj8\" (UniqueName: \"kubernetes.io/projected/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-kube-api-access-q7hj8\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666279 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-run-httpd\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666327 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-config-data\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666355 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-internal-tls-certs\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666410 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-combined-ca-bundle\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666479 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-etc-swift\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc 
kubenswrapper[4982]: I0123 09:31:04.666556 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-public-tls-certs\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666615 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-log-httpd\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.666889 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-run-httpd\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.667236 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-log-httpd\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.671303 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-public-tls-certs\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.671481 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-internal-tls-certs\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.675393 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-config-data\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.677326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-combined-ca-bundle\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.677722 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-etc-swift\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.690699 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7hj8\" (UniqueName: 
\"kubernetes.io/projected/7e007917-bb3f-4c30-a0a6-842f1a1cbb34-kube-api-access-q7hj8\") pod \"swift-proxy-949fbbb54-chlc7\" (UID: \"7e007917-bb3f-4c30-a0a6-842f1a1cbb34\") " pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.745150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc9469686-sgmtj" event={"ID":"a015ccf0-7a08-462a-b887-82433c164a3f","Type":"ContainerStarted","Data":"a88a2d1ecf0c306f6c9153adb07af7f2dec294aaf7c54a2411035ba6390994e2"} Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.745196 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc9469686-sgmtj" event={"ID":"a015ccf0-7a08-462a-b887-82433c164a3f","Type":"ContainerStarted","Data":"c58c6849c813271dd8b94877c1c4db04f10f7a5f59dac0fd387036b4577c2f2c"} Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.745207 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc9469686-sgmtj" event={"ID":"a015ccf0-7a08-462a-b887-82433c164a3f","Type":"ContainerStarted","Data":"50caf11ae01938f8a0bd0246da88bd86e10a6f75ba3ea5b0e855ca8af6c14502"} Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.746229 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.746256 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.777100 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7dc9469686-sgmtj" podStartSLOduration=2.77707964 podStartE2EDuration="2.77707964s" podCreationTimestamp="2026-01-23 09:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:04.764512149 +0000 UTC m=+5739.244875545" watchObservedRunningTime="2026-01-23 09:31:04.77707964 +0000 UTC m=+5739.257443026" Jan 23 09:31:04 crc kubenswrapper[4982]: I0123 09:31:04.780550 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:05 crc kubenswrapper[4982]: I0123 09:31:05.511792 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-949fbbb54-chlc7"] Jan 23 09:31:05 crc kubenswrapper[4982]: W0123 09:31:05.531929 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e007917_bb3f_4c30_a0a6_842f1a1cbb34.slice/crio-d18caec2de93a170c05759028f76de0238b59585c8a51409f943d3e90513d4b9 WatchSource:0}: Error finding container d18caec2de93a170c05759028f76de0238b59585c8a51409f943d3e90513d4b9: Status 404 returned error can't find the container with id d18caec2de93a170c05759028f76de0238b59585c8a51409f943d3e90513d4b9 Jan 23 09:31:05 crc kubenswrapper[4982]: I0123 09:31:05.764634 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-949fbbb54-chlc7" event={"ID":"7e007917-bb3f-4c30-a0a6-842f1a1cbb34","Type":"ContainerStarted","Data":"d18caec2de93a170c05759028f76de0238b59585c8a51409f943d3e90513d4b9"} Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.780795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-949fbbb54-chlc7" event={"ID":"7e007917-bb3f-4c30-a0a6-842f1a1cbb34","Type":"ContainerStarted","Data":"9d6e7cfe4a5521e8f38ddf557fa9a7b3bf8f251466785646f36d44aaf99895d7"} Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.781374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-949fbbb54-chlc7" event={"ID":"7e007917-bb3f-4c30-a0a6-842f1a1cbb34","Type":"ContainerStarted","Data":"320d03ce1e966b23e40dfe301a23fd17a686e2b3553076a554e8cb57b791ce00"} Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.781394 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.781409 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-949fbbb54-chlc7" Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.783470 4982 generic.go:334] "Generic (PLEG): container finished" podID="6e957df1-1e40-4230-9db0-af1795aebbc2" containerID="9a6c486b6ad61c1836ad94041e832ec7d2696f97308f10008fa781d92f0bd332" exitCode=0 Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.783558 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8hv7d" event={"ID":"6e957df1-1e40-4230-9db0-af1795aebbc2","Type":"ContainerDied","Data":"9a6c486b6ad61c1836ad94041e832ec7d2696f97308f10008fa781d92f0bd332"} Jan 23 09:31:06 crc kubenswrapper[4982]: I0123 09:31:06.815817 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-949fbbb54-chlc7" podStartSLOduration=2.815784884 podStartE2EDuration="2.815784884s" podCreationTimestamp="2026-01-23 09:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:06.801410043 +0000 UTC m=+5741.281773439" watchObservedRunningTime="2026-01-23 09:31:06.815784884 +0000 UTC m=+5741.296148270" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.159153 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243179 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-combined-ca-bundle\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243243 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-scripts\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243429 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-swiftconf\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243459 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-ring-data-devices\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243485 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-dispersionconf\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243512 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5glwc\" (UniqueName: \"kubernetes.io/projected/6e957df1-1e40-4230-9db0-af1795aebbc2-kube-api-access-5glwc\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.243554 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e957df1-1e40-4230-9db0-af1795aebbc2-etc-swift\") pod \"6e957df1-1e40-4230-9db0-af1795aebbc2\" (UID: \"6e957df1-1e40-4230-9db0-af1795aebbc2\") " Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.244249 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.244702 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e957df1-1e40-4230-9db0-af1795aebbc2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.244918 4982 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.244941 4982 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6e957df1-1e40-4230-9db0-af1795aebbc2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.249075 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e957df1-1e40-4230-9db0-af1795aebbc2-kube-api-access-5glwc" (OuterVolumeSpecName: "kube-api-access-5glwc") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "kube-api-access-5glwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.251574 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.267897 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-scripts" (OuterVolumeSpecName: "scripts") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.270758 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.275899 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e957df1-1e40-4230-9db0-af1795aebbc2" (UID: "6e957df1-1e40-4230-9db0-af1795aebbc2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.346736 4982 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.347043 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5glwc\" (UniqueName: \"kubernetes.io/projected/6e957df1-1e40-4230-9db0-af1795aebbc2-kube-api-access-5glwc\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.347057 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.347066 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e957df1-1e40-4230-9db0-af1795aebbc2-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.347074 4982 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6e957df1-1e40-4230-9db0-af1795aebbc2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.813886 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8hv7d" event={"ID":"6e957df1-1e40-4230-9db0-af1795aebbc2","Type":"ContainerDied","Data":"3649db45af0033d2b8d9adac165f2dba2193038e0be8d6ce5ccb3aae42c079eb"} Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.813933 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3649db45af0033d2b8d9adac165f2dba2193038e0be8d6ce5ccb3aae42c079eb" Jan 23 09:31:08 crc kubenswrapper[4982]: I0123 09:31:08.813962 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8hv7d" Jan 23 09:31:10 crc kubenswrapper[4982]: I0123 09:31:10.635922 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:10 crc kubenswrapper[4982]: I0123 09:31:10.705700 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844b8b8475-vdxlh"] Jan 23 09:31:10 crc kubenswrapper[4982]: I0123 09:31:10.706129 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerName="dnsmasq-dns" containerID="cri-o://ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803" gracePeriod=10 Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.435907 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.435955 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.435993 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.436593 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.436663 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" gracePeriod=600 Jan 23 09:31:11 crc kubenswrapper[4982]: E0123 09:31:11.574568 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.702295 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.819787 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-dns-svc\") pod \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.819950 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnc5\" (UniqueName: \"kubernetes.io/projected/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-kube-api-access-bmnc5\") pod \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.820009 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-config\") pod \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.820031 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-sb\") pod \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.820110 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-nb\") pod \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\" (UID: \"6ef3e8dc-8e96-48b7-99b7-b54250bf889a\") " Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.830891 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-kube-api-access-bmnc5" (OuterVolumeSpecName: "kube-api-access-bmnc5") pod "6ef3e8dc-8e96-48b7-99b7-b54250bf889a" (UID: "6ef3e8dc-8e96-48b7-99b7-b54250bf889a"). InnerVolumeSpecName "kube-api-access-bmnc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.860017 4982 generic.go:334] "Generic (PLEG): container finished" podID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerID="ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803" exitCode=0 Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.860188 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.865076 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" exitCode=0 Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.871088 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ef3e8dc-8e96-48b7-99b7-b54250bf889a" (UID: "6ef3e8dc-8e96-48b7-99b7-b54250bf889a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.894113 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ef3e8dc-8e96-48b7-99b7-b54250bf889a" (UID: "6ef3e8dc-8e96-48b7-99b7-b54250bf889a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.898273 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-config" (OuterVolumeSpecName: "config") pod "6ef3e8dc-8e96-48b7-99b7-b54250bf889a" (UID: "6ef3e8dc-8e96-48b7-99b7-b54250bf889a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.906355 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ef3e8dc-8e96-48b7-99b7-b54250bf889a" (UID: "6ef3e8dc-8e96-48b7-99b7-b54250bf889a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.922989 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.923025 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.923037 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnc5\" (UniqueName: \"kubernetes.io/projected/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-kube-api-access-bmnc5\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.923051 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.923063 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef3e8dc-8e96-48b7-99b7-b54250bf889a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.955766 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" event={"ID":"6ef3e8dc-8e96-48b7-99b7-b54250bf889a","Type":"ContainerDied","Data":"ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803"} Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.956106 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844b8b8475-vdxlh" event={"ID":"6ef3e8dc-8e96-48b7-99b7-b54250bf889a","Type":"ContainerDied","Data":"b22a95ac1d8eda731baefde8d760b23d84a18a2c2e9e78ec7da36d7dd93b7245"} Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.956208 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd"} Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.956170 4982 scope.go:117] "RemoveContainer" containerID="ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.957183 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:31:11 crc kubenswrapper[4982]: E0123 09:31:11.957576 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:31:11 crc kubenswrapper[4982]: I0123 09:31:11.988046 4982 scope.go:117] "RemoveContainer" containerID="a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a" Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.021535 4982 scope.go:117] "RemoveContainer" containerID="ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803" Jan 23 09:31:12 crc kubenswrapper[4982]: E0123 09:31:12.022485 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803\": container with ID starting with ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803 not found: ID does not exist" containerID="ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803" Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.022617 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803"} err="failed to get container status \"ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803\": rpc error: code = NotFound desc = could not find container \"ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803\": container with ID starting with ba8f023b7f3bd42aec116d1a02ecfdb3390aaa398628215432796dea698ba803 not found: ID does not exist" Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.022737 4982 scope.go:117] "RemoveContainer" containerID="a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a" Jan 23 09:31:12 crc kubenswrapper[4982]: E0123 09:31:12.024684 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a\": container with ID starting with a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a not found: ID does not exist" containerID="a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a" Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.024729 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a"} err="failed to get container status \"a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a\": rpc error: code = NotFound desc = could not find container \"a21beaa0931c5f0bc643b85a5b25545504bd92c15d0d70fc8dcedb3244b1cd8a\": container with ID starting 
Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.024759 4982 scope.go:117] "RemoveContainer" containerID="7fe69a70f46ce57d41d7e5c077b325e9c4d29babec4475eb09ec573022ab581a"
Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.193810 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844b8b8475-vdxlh"]
Jan 23 09:31:12 crc kubenswrapper[4982]: I0123 09:31:12.204851 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-844b8b8475-vdxlh"]
Jan 23 09:31:13 crc kubenswrapper[4982]: I0123 09:31:13.161503 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7dc9469686-sgmtj"
Jan 23 09:31:13 crc kubenswrapper[4982]: I0123 09:31:13.162167 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7dc9469686-sgmtj"
Jan 23 09:31:13 crc kubenswrapper[4982]: I0123 09:31:13.837778 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" path="/var/lib/kubelet/pods/6ef3e8dc-8e96-48b7-99b7-b54250bf889a/volumes"
Jan 23 09:31:14 crc kubenswrapper[4982]: I0123 09:31:14.785509 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-949fbbb54-chlc7"
Jan 23 09:31:14 crc kubenswrapper[4982]: I0123 09:31:14.787682 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-949fbbb54-chlc7"
Jan 23 09:31:14 crc kubenswrapper[4982]: I0123 09:31:14.874372 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7dc9469686-sgmtj"]
Jan 23 09:31:14 crc kubenswrapper[4982]: I0123 09:31:14.874734 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7dc9469686-sgmtj" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-httpd" containerID="cri-o://c58c6849c813271dd8b94877c1c4db04f10f7a5f59dac0fd387036b4577c2f2c" gracePeriod=30
Jan 23 09:31:14 crc kubenswrapper[4982]: I0123 09:31:14.874935 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-7dc9469686-sgmtj" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-server" containerID="cri-o://a88a2d1ecf0c306f6c9153adb07af7f2dec294aaf7c54a2411035ba6390994e2" gracePeriod=30
Jan 23 09:31:15 crc kubenswrapper[4982]: I0123 09:31:15.916508 4982 generic.go:334] "Generic (PLEG): container finished" podID="a015ccf0-7a08-462a-b887-82433c164a3f" containerID="a88a2d1ecf0c306f6c9153adb07af7f2dec294aaf7c54a2411035ba6390994e2" exitCode=0
Jan 23 09:31:15 crc kubenswrapper[4982]: I0123 09:31:15.916883 4982 generic.go:334] "Generic (PLEG): container finished" podID="a015ccf0-7a08-462a-b887-82433c164a3f" containerID="c58c6849c813271dd8b94877c1c4db04f10f7a5f59dac0fd387036b4577c2f2c" exitCode=0
Jan 23 09:31:15 crc kubenswrapper[4982]: I0123 09:31:15.916913 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc9469686-sgmtj" event={"ID":"a015ccf0-7a08-462a-b887-82433c164a3f","Type":"ContainerDied","Data":"a88a2d1ecf0c306f6c9153adb07af7f2dec294aaf7c54a2411035ba6390994e2"}
Jan 23 09:31:15 crc kubenswrapper[4982]: I0123 09:31:15.916943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc9469686-sgmtj" event={"ID":"a015ccf0-7a08-462a-b887-82433c164a3f","Type":"ContainerDied","Data":"c58c6849c813271dd8b94877c1c4db04f10f7a5f59dac0fd387036b4577c2f2c"}
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.531831 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7dc9469686-sgmtj"
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.656394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-run-httpd\") pod \"a015ccf0-7a08-462a-b887-82433c164a3f\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") "
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.656500 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-log-httpd\") pod \"a015ccf0-7a08-462a-b887-82433c164a3f\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") "
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.656540 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-config-data\") pod \"a015ccf0-7a08-462a-b887-82433c164a3f\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") "
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.656666 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbrws\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-kube-api-access-sbrws\") pod \"a015ccf0-7a08-462a-b887-82433c164a3f\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") "
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.656765 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-combined-ca-bundle\") pod \"a015ccf0-7a08-462a-b887-82433c164a3f\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") "
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.656830 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-etc-swift\") pod \"a015ccf0-7a08-462a-b887-82433c164a3f\" (UID: \"a015ccf0-7a08-462a-b887-82433c164a3f\") "
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.660336 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a015ccf0-7a08-462a-b887-82433c164a3f" (UID: "a015ccf0-7a08-462a-b887-82433c164a3f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.661059 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a015ccf0-7a08-462a-b887-82433c164a3f" (UID: "a015ccf0-7a08-462a-b887-82433c164a3f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.683936 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a015ccf0-7a08-462a-b887-82433c164a3f" (UID: "a015ccf0-7a08-462a-b887-82433c164a3f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.722879 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-kube-api-access-sbrws" (OuterVolumeSpecName: "kube-api-access-sbrws") pod "a015ccf0-7a08-462a-b887-82433c164a3f" (UID: "a015ccf0-7a08-462a-b887-82433c164a3f"). InnerVolumeSpecName "kube-api-access-sbrws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.760777 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-config-data" (OuterVolumeSpecName: "config-data") pod "a015ccf0-7a08-462a-b887-82433c164a3f" (UID: "a015ccf0-7a08-462a-b887-82433c164a3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.762109 4982 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.762132 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.762142 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a015ccf0-7a08-462a-b887-82433c164a3f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.762153 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.762163 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbrws\" (UniqueName: \"kubernetes.io/projected/a015ccf0-7a08-462a-b887-82433c164a3f-kube-api-access-sbrws\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.790738 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a015ccf0-7a08-462a-b887-82433c164a3f" (UID: "a015ccf0-7a08-462a-b887-82433c164a3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.864318 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a015ccf0-7a08-462a-b887-82433c164a3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.927118 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7dc9469686-sgmtj" event={"ID":"a015ccf0-7a08-462a-b887-82433c164a3f","Type":"ContainerDied","Data":"50caf11ae01938f8a0bd0246da88bd86e10a6f75ba3ea5b0e855ca8af6c14502"} Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.927165 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7dc9469686-sgmtj" Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.927184 4982 scope.go:117] "RemoveContainer" containerID="a88a2d1ecf0c306f6c9153adb07af7f2dec294aaf7c54a2411035ba6390994e2" Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.956795 4982 scope.go:117] "RemoveContainer" containerID="c58c6849c813271dd8b94877c1c4db04f10f7a5f59dac0fd387036b4577c2f2c" Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.970290 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-7dc9469686-sgmtj"] Jan 23 09:31:16 crc kubenswrapper[4982]: I0123 09:31:16.982545 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-7dc9469686-sgmtj"] Jan 23 09:31:17 crc kubenswrapper[4982]: I0123 09:31:17.839069 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" path="/var/lib/kubelet/pods/a015ccf0-7a08-462a-b887-82433c164a3f/volumes" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.011591 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8f8bd"] Jan 23 09:31:21 crc kubenswrapper[4982]: E0123 09:31:21.012336 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerName="init" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012353 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerName="init" Jan 23 09:31:21 crc kubenswrapper[4982]: E0123 09:31:21.012368 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerName="dnsmasq-dns" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012376 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerName="dnsmasq-dns" Jan 23 09:31:21 crc kubenswrapper[4982]: E0123 09:31:21.012415 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-httpd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012422 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-httpd" Jan 23 09:31:21 crc kubenswrapper[4982]: E0123 09:31:21.012433 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e957df1-1e40-4230-9db0-af1795aebbc2" containerName="swift-ring-rebalance" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012440 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e957df1-1e40-4230-9db0-af1795aebbc2" containerName="swift-ring-rebalance" Jan 23 09:31:21 crc kubenswrapper[4982]: E0123 09:31:21.012455 4982 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-server" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012462 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-server" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012691 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-httpd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012740 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef3e8dc-8e96-48b7-99b7-b54250bf889a" containerName="dnsmasq-dns" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012760 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e957df1-1e40-4230-9db0-af1795aebbc2" containerName="swift-ring-rebalance" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.012778 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a015ccf0-7a08-462a-b887-82433c164a3f" containerName="proxy-server" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.013580 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.038098 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f518-account-create-update-qxfr8"] Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.039351 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8f8bd"] Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.039442 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.042133 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.048837 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f518-account-create-update-qxfr8"] Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.142684 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlpc\" (UniqueName: \"kubernetes.io/projected/5a5d8863-69f4-487e-b334-344289519860-kube-api-access-jwlpc\") pod \"cinder-f518-account-create-update-qxfr8\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") " pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.143140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f723816-9cde-4d95-af5b-6028f03f3374-operator-scripts\") pod \"cinder-db-create-8f8bd\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") " pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.143169 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5d8863-69f4-487e-b334-344289519860-operator-scripts\") pod \"cinder-f518-account-create-update-qxfr8\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") " pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.143386 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5tm\" (UniqueName: \"kubernetes.io/projected/8f723816-9cde-4d95-af5b-6028f03f3374-kube-api-access-lb5tm\") pod \"cinder-db-create-8f8bd\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") " pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.245532 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5d8863-69f4-487e-b334-344289519860-operator-scripts\") pod \"cinder-f518-account-create-update-qxfr8\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") " pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.245578 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f723816-9cde-4d95-af5b-6028f03f3374-operator-scripts\") pod \"cinder-db-create-8f8bd\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") " pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.245665 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb5tm\" (UniqueName: \"kubernetes.io/projected/8f723816-9cde-4d95-af5b-6028f03f3374-kube-api-access-lb5tm\") pod \"cinder-db-create-8f8bd\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") " pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.245762 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwlpc\" (UniqueName: \"kubernetes.io/projected/5a5d8863-69f4-487e-b334-344289519860-kube-api-access-jwlpc\") pod \"cinder-f518-account-create-update-qxfr8\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") " pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.246637 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5d8863-69f4-487e-b334-344289519860-operator-scripts\") pod \"cinder-f518-account-create-update-qxfr8\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") " pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.246646 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f723816-9cde-4d95-af5b-6028f03f3374-operator-scripts\") pod \"cinder-db-create-8f8bd\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") " pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.265271 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwlpc\" (UniqueName: \"kubernetes.io/projected/5a5d8863-69f4-487e-b334-344289519860-kube-api-access-jwlpc\") pod \"cinder-f518-account-create-update-qxfr8\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") " pod="openstack/cinder-f518-account-create-update-qxfr8" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.267307 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb5tm\" (UniqueName: \"kubernetes.io/projected/8f723816-9cde-4d95-af5b-6028f03f3374-kube-api-access-lb5tm\") pod \"cinder-db-create-8f8bd\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") " pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 
Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.366333 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8f8bd"
Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.378201 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f518-account-create-update-qxfr8"
Jan 23 09:31:21 crc kubenswrapper[4982]: W0123 09:31:21.860308 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f723816_9cde_4d95_af5b_6028f03f3374.slice/crio-42d0e668c5211460fd06817531bc795c713b69ba5af65ac98650241438e27ce4 WatchSource:0}: Error finding container 42d0e668c5211460fd06817531bc795c713b69ba5af65ac98650241438e27ce4: Status 404 returned error can't find the container with id 42d0e668c5211460fd06817531bc795c713b69ba5af65ac98650241438e27ce4
Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.863144 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8f8bd"]
Jan 23 09:31:21 crc kubenswrapper[4982]: I0123 09:31:21.941591 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f518-account-create-update-qxfr8"]
Jan 23 09:31:21 crc kubenswrapper[4982]: W0123 09:31:21.947195 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a5d8863_69f4_487e_b334_344289519860.slice/crio-c89e04e96a8ffa570887b2b0c2d8b0d54dd8bcba12e5d09514087c618539a4b7 WatchSource:0}: Error finding container c89e04e96a8ffa570887b2b0c2d8b0d54dd8bcba12e5d09514087c618539a4b7: Status 404 returned error can't find the container with id c89e04e96a8ffa570887b2b0c2d8b0d54dd8bcba12e5d09514087c618539a4b7
Jan 23 09:31:22 crc kubenswrapper[4982]: I0123 09:31:22.003142 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8f8bd" event={"ID":"8f723816-9cde-4d95-af5b-6028f03f3374","Type":"ContainerStarted","Data":"42d0e668c5211460fd06817531bc795c713b69ba5af65ac98650241438e27ce4"}
Jan 23 09:31:22 crc kubenswrapper[4982]: I0123 09:31:22.004418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f518-account-create-update-qxfr8" event={"ID":"5a5d8863-69f4-487e-b334-344289519860","Type":"ContainerStarted","Data":"c89e04e96a8ffa570887b2b0c2d8b0d54dd8bcba12e5d09514087c618539a4b7"}
Jan 23 09:31:23 crc kubenswrapper[4982]: I0123 09:31:23.018430 4982 generic.go:334] "Generic (PLEG): container finished" podID="8f723816-9cde-4d95-af5b-6028f03f3374" containerID="8bfd2cc213a7971067bf30d1738fdf7e03ae0ba5eee3d2f43c0f8550c9cae3c0" exitCode=0
Jan 23 09:31:23 crc kubenswrapper[4982]: I0123 09:31:23.018571 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8f8bd" event={"ID":"8f723816-9cde-4d95-af5b-6028f03f3374","Type":"ContainerDied","Data":"8bfd2cc213a7971067bf30d1738fdf7e03ae0ba5eee3d2f43c0f8550c9cae3c0"}
Jan 23 09:31:23 crc kubenswrapper[4982]: I0123 09:31:23.021593 4982 generic.go:334] "Generic (PLEG): container finished" podID="5a5d8863-69f4-487e-b334-344289519860" containerID="9a6261d8400f110a55ebd39650098ebe81199a32c40d4b69f0f2936678c54856" exitCode=0
Jan 23 09:31:23 crc kubenswrapper[4982]: I0123 09:31:23.021744 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f518-account-create-update-qxfr8" event={"ID":"5a5d8863-69f4-487e-b334-344289519860","Type":"ContainerDied","Data":"9a6261d8400f110a55ebd39650098ebe81199a32c40d4b69f0f2936678c54856"}
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.488680 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f518-account-create-update-qxfr8"
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.497382 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8f8bd"
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.612354 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwlpc\" (UniqueName: \"kubernetes.io/projected/5a5d8863-69f4-487e-b334-344289519860-kube-api-access-jwlpc\") pod \"5a5d8863-69f4-487e-b334-344289519860\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") "
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.612422 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5d8863-69f4-487e-b334-344289519860-operator-scripts\") pod \"5a5d8863-69f4-487e-b334-344289519860\" (UID: \"5a5d8863-69f4-487e-b334-344289519860\") "
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.612546 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb5tm\" (UniqueName: \"kubernetes.io/projected/8f723816-9cde-4d95-af5b-6028f03f3374-kube-api-access-lb5tm\") pod \"8f723816-9cde-4d95-af5b-6028f03f3374\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") "
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.613170 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f723816-9cde-4d95-af5b-6028f03f3374-operator-scripts\") pod \"8f723816-9cde-4d95-af5b-6028f03f3374\" (UID: \"8f723816-9cde-4d95-af5b-6028f03f3374\") "
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.613895 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a5d8863-69f4-487e-b334-344289519860-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a5d8863-69f4-487e-b334-344289519860" (UID: "5a5d8863-69f4-487e-b334-344289519860"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.613954 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f723816-9cde-4d95-af5b-6028f03f3374-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f723816-9cde-4d95-af5b-6028f03f3374" (UID: "8f723816-9cde-4d95-af5b-6028f03f3374"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.614125 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a5d8863-69f4-487e-b334-344289519860-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.614150 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f723816-9cde-4d95-af5b-6028f03f3374-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.618757 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f723816-9cde-4d95-af5b-6028f03f3374-kube-api-access-lb5tm" (OuterVolumeSpecName: "kube-api-access-lb5tm") pod "8f723816-9cde-4d95-af5b-6028f03f3374" (UID: "8f723816-9cde-4d95-af5b-6028f03f3374"). InnerVolumeSpecName "kube-api-access-lb5tm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.619291 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5d8863-69f4-487e-b334-344289519860-kube-api-access-jwlpc" (OuterVolumeSpecName: "kube-api-access-jwlpc") pod "5a5d8863-69f4-487e-b334-344289519860" (UID: "5a5d8863-69f4-487e-b334-344289519860"). InnerVolumeSpecName "kube-api-access-jwlpc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.717125 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb5tm\" (UniqueName: \"kubernetes.io/projected/8f723816-9cde-4d95-af5b-6028f03f3374-kube-api-access-lb5tm\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:24 crc kubenswrapper[4982]: I0123 09:31:24.717160 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwlpc\" (UniqueName: \"kubernetes.io/projected/5a5d8863-69f4-487e-b334-344289519860-kube-api-access-jwlpc\") on node \"crc\" DevicePath \"\""
Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.041073 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f518-account-create-update-qxfr8" event={"ID":"5a5d8863-69f4-487e-b334-344289519860","Type":"ContainerDied","Data":"c89e04e96a8ffa570887b2b0c2d8b0d54dd8bcba12e5d09514087c618539a4b7"}
Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.041101 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f518-account-create-update-qxfr8"
Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.041122 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89e04e96a8ffa570887b2b0c2d8b0d54dd8bcba12e5d09514087c618539a4b7"
Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.046300 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8f8bd" event={"ID":"8f723816-9cde-4d95-af5b-6028f03f3374","Type":"ContainerDied","Data":"42d0e668c5211460fd06817531bc795c713b69ba5af65ac98650241438e27ce4"}
Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.046351 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42d0e668c5211460fd06817531bc795c713b69ba5af65ac98650241438e27ce4"
Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.046566 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8f8bd"
Need to start a new one" pod="openstack/cinder-db-create-8f8bd" Jan 23 09:31:25 crc kubenswrapper[4982]: I0123 09:31:25.832480 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:31:25 crc kubenswrapper[4982]: E0123 09:31:25.833027 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.321703 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-bs57c"] Jan 23 09:31:26 crc kubenswrapper[4982]: E0123 09:31:26.322134 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5d8863-69f4-487e-b334-344289519860" containerName="mariadb-account-create-update" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.322155 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5d8863-69f4-487e-b334-344289519860" containerName="mariadb-account-create-update" Jan 23 09:31:26 crc kubenswrapper[4982]: E0123 09:31:26.322174 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f723816-9cde-4d95-af5b-6028f03f3374" containerName="mariadb-database-create" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.322180 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f723816-9cde-4d95-af5b-6028f03f3374" containerName="mariadb-database-create" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.322395 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f723816-9cde-4d95-af5b-6028f03f3374" containerName="mariadb-database-create" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.322414 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5d8863-69f4-487e-b334-344289519860" containerName="mariadb-account-create-update" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.323063 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.329546 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nbrsv" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.329873 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.330135 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.334705 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bs57c"] Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.453381 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-combined-ca-bundle\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.453658 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-etc-machine-id\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.453874 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vqlx\" (UniqueName: \"kubernetes.io/projected/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-kube-api-access-8vqlx\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.453987 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-db-sync-config-data\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.454256 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-scripts\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.454509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-config-data\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.556736 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-scripts\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.557063 4982 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-config-data\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.557198 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-combined-ca-bundle\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.557436 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-etc-machine-id\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.557494 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-etc-machine-id\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.557588 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vqlx\" (UniqueName: \"kubernetes.io/projected/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-kube-api-access-8vqlx\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.557816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-db-sync-config-data\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.561552 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-combined-ca-bundle\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.562581 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-config-data\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.565054 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-db-sync-config-data\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.576146 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-scripts\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " 
pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.576400 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vqlx\" (UniqueName: \"kubernetes.io/projected/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-kube-api-access-8vqlx\") pod \"cinder-db-sync-bs57c\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:26 crc kubenswrapper[4982]: I0123 09:31:26.649701 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:27 crc kubenswrapper[4982]: I0123 09:31:27.096108 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-bs57c"] Jan 23 09:31:28 crc kubenswrapper[4982]: I0123 09:31:28.076044 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bs57c" event={"ID":"1e4d055f-cbd3-4a1f-8b27-dd85681dffff","Type":"ContainerStarted","Data":"e14355e90e6dd264bcc2375923b70e832db4cbd1de90d4598cefd60ca8e3ea24"} Jan 23 09:31:28 crc kubenswrapper[4982]: I0123 09:31:28.076677 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bs57c" event={"ID":"1e4d055f-cbd3-4a1f-8b27-dd85681dffff","Type":"ContainerStarted","Data":"fc2fb1d77f62bf0bb6bd41cd50346eff07ff895eae210ab0cbb62243b0cfcd34"} Jan 23 09:31:28 crc kubenswrapper[4982]: I0123 09:31:28.099280 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-bs57c" podStartSLOduration=2.099264558 podStartE2EDuration="2.099264558s" podCreationTimestamp="2026-01-23 09:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:28.0915944 +0000 UTC m=+5762.571957786" watchObservedRunningTime="2026-01-23 09:31:28.099264558 +0000 UTC m=+5762.579627934" Jan 23 09:31:32 crc kubenswrapper[4982]: I0123 09:31:32.048377 4982 generic.go:334] "Generic (PLEG): container finished" podID="1e4d055f-cbd3-4a1f-8b27-dd85681dffff" containerID="e14355e90e6dd264bcc2375923b70e832db4cbd1de90d4598cefd60ca8e3ea24" exitCode=0 Jan 23 09:31:32 crc kubenswrapper[4982]: I0123 09:31:32.048550 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bs57c" event={"ID":"1e4d055f-cbd3-4a1f-8b27-dd85681dffff","Type":"ContainerDied","Data":"e14355e90e6dd264bcc2375923b70e832db4cbd1de90d4598cefd60ca8e3ea24"} Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.401818 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-bs57c" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.590376 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-config-data\") pod \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.590498 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-db-sync-config-data\") pod \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.590593 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-etc-machine-id\") pod \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.590687 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-scripts\") pod \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.590722 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-combined-ca-bundle\") pod \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.590756 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vqlx\" (UniqueName: \"kubernetes.io/projected/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-kube-api-access-8vqlx\") pod \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\" (UID: \"1e4d055f-cbd3-4a1f-8b27-dd85681dffff\") " Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.591417 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1e4d055f-cbd3-4a1f-8b27-dd85681dffff" (UID: "1e4d055f-cbd3-4a1f-8b27-dd85681dffff"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.597750 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1e4d055f-cbd3-4a1f-8b27-dd85681dffff" (UID: "1e4d055f-cbd3-4a1f-8b27-dd85681dffff"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.598104 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-scripts" (OuterVolumeSpecName: "scripts") pod "1e4d055f-cbd3-4a1f-8b27-dd85681dffff" (UID: "1e4d055f-cbd3-4a1f-8b27-dd85681dffff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.608822 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-kube-api-access-8vqlx" (OuterVolumeSpecName: "kube-api-access-8vqlx") pod "1e4d055f-cbd3-4a1f-8b27-dd85681dffff" (UID: "1e4d055f-cbd3-4a1f-8b27-dd85681dffff"). InnerVolumeSpecName "kube-api-access-8vqlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.625061 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e4d055f-cbd3-4a1f-8b27-dd85681dffff" (UID: "1e4d055f-cbd3-4a1f-8b27-dd85681dffff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.643277 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-config-data" (OuterVolumeSpecName: "config-data") pod "1e4d055f-cbd3-4a1f-8b27-dd85681dffff" (UID: "1e4d055f-cbd3-4a1f-8b27-dd85681dffff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.692782 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.693001 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.693068 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vqlx\" (UniqueName: \"kubernetes.io/projected/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-kube-api-access-8vqlx\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.693123 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.693174 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:33 crc kubenswrapper[4982]: I0123 09:31:33.693224 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1e4d055f-cbd3-4a1f-8b27-dd85681dffff-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.070809 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-bs57c" event={"ID":"1e4d055f-cbd3-4a1f-8b27-dd85681dffff","Type":"ContainerDied","Data":"fc2fb1d77f62bf0bb6bd41cd50346eff07ff895eae210ab0cbb62243b0cfcd34"} Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.070855 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc2fb1d77f62bf0bb6bd41cd50346eff07ff895eae210ab0cbb62243b0cfcd34" Jan 23 09:31:34 crc 
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.070877 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-bs57c"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.429404 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-555f4b56f5-z4cxc"]
Jan 23 09:31:34 crc kubenswrapper[4982]: E0123 09:31:34.433987 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4d055f-cbd3-4a1f-8b27-dd85681dffff" containerName="cinder-db-sync"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.434025 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4d055f-cbd3-4a1f-8b27-dd85681dffff" containerName="cinder-db-sync"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.434309 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4d055f-cbd3-4a1f-8b27-dd85681dffff" containerName="cinder-db-sync"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.435309 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.449307 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-555f4b56f5-z4cxc"]
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.594705 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.596452 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.604662 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.604751 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.604895 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nbrsv"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.605009 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.618874 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlrkv\" (UniqueName: \"kubernetes.io/projected/78597e4e-a98c-4321-b6b8-fcd131d271e5-kube-api-access-wlrkv\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.618923 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-nb\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.619038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-sb\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.619060 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-dns-svc\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.619090 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-config\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.619238 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-sb\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721380 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-dns-svc\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-config\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721457 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721589 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721656 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721681 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-logs\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721705 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffccr\" (UniqueName: \"kubernetes.io/projected/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-kube-api-access-ffccr\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlrkv\" (UniqueName: \"kubernetes.io/projected/78597e4e-a98c-4321-b6b8-fcd131d271e5-kube-api-access-wlrkv\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721865 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-nb\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721937 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.721972 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-scripts\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.722522 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-sb\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.722723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-config\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.722779 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-nb\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.723064 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-dns-svc\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.741644 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlrkv\" (UniqueName: \"kubernetes.io/projected/78597e4e-a98c-4321-b6b8-fcd131d271e5-kube-api-access-wlrkv\") pod \"dnsmasq-dns-555f4b56f5-z4cxc\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.781840 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.824200 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.824958 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.825061 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.825086 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-logs\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.825111 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffccr\" (UniqueName: \"kubernetes.io/projected/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-kube-api-access-ffccr\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.825143 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.825169 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-scripts\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.825799 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0"
Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.828710 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data-custom\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") "
pod="openstack/cinder-api-0" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.829219 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-logs\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.830101 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.830428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-scripts\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.831804 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.846291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffccr\" (UniqueName: \"kubernetes.io/projected/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-kube-api-access-ffccr\") pod \"cinder-api-0\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " pod="openstack/cinder-api-0" Jan 23 09:31:34 crc kubenswrapper[4982]: I0123 09:31:34.928388 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:31:35 crc kubenswrapper[4982]: I0123 09:31:35.361826 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-555f4b56f5-z4cxc"] Jan 23 09:31:35 crc kubenswrapper[4982]: I0123 09:31:35.511859 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:35 crc kubenswrapper[4982]: W0123 09:31:35.516452 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdd865fd_45f5_4ef2_ab50_875f8d3eacd2.slice/crio-d622f3b42ad58c98b7db6131fab6e124576bc2d231c033590d86e14113691e35 WatchSource:0}: Error finding container d622f3b42ad58c98b7db6131fab6e124576bc2d231c033590d86e14113691e35: Status 404 returned error can't find the container with id d622f3b42ad58c98b7db6131fab6e124576bc2d231c033590d86e14113691e35 Jan 23 09:31:36 crc kubenswrapper[4982]: I0123 09:31:36.094716 4982 generic.go:334] "Generic (PLEG): container finished" podID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerID="dce96a585256751a3285f1ca1e863a68c01c30f36362107e0b17900271210e6b" exitCode=0 Jan 23 09:31:36 crc kubenswrapper[4982]: I0123 09:31:36.095594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" event={"ID":"78597e4e-a98c-4321-b6b8-fcd131d271e5","Type":"ContainerDied","Data":"dce96a585256751a3285f1ca1e863a68c01c30f36362107e0b17900271210e6b"} Jan 23 09:31:36 crc kubenswrapper[4982]: I0123 09:31:36.095664 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" event={"ID":"78597e4e-a98c-4321-b6b8-fcd131d271e5","Type":"ContainerStarted","Data":"d04ea37a9e7dca3580af964d37523f59912b76c90a01890ecb2f7acbfe5e2438"} Jan 23 09:31:36 crc kubenswrapper[4982]: I0123 09:31:36.100918 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2","Type":"ContainerStarted","Data":"d622f3b42ad58c98b7db6131fab6e124576bc2d231c033590d86e14113691e35"} Jan 23 09:31:36 crc kubenswrapper[4982]: I0123 09:31:36.624224 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.121273 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2","Type":"ContainerStarted","Data":"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647"} Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.121329 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2","Type":"ContainerStarted","Data":"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447"} Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.121387 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api-log" containerID="cri-o://9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447" gracePeriod=30 Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.121443 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api" containerID="cri-o://e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647" gracePeriod=30 Jan 23 09:31:37 crc 
kubenswrapper[4982]: I0123 09:31:37.121446 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.124271 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" event={"ID":"78597e4e-a98c-4321-b6b8-fcd131d271e5","Type":"ContainerStarted","Data":"bbbcce015bc58d99e9f1c649e927cd33ac886b64701f736d5c5655ca6f5b6e29"} Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.147050 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.147019401 podStartE2EDuration="3.147019401s" podCreationTimestamp="2026-01-23 09:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:37.141661716 +0000 UTC m=+5771.622025122" watchObservedRunningTime="2026-01-23 09:31:37.147019401 +0000 UTC m=+5771.627382977" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.164264 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" podStartSLOduration=3.164240799 podStartE2EDuration="3.164240799s" podCreationTimestamp="2026-01-23 09:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:37.164222379 +0000 UTC m=+5771.644585765" watchObservedRunningTime="2026-01-23 09:31:37.164240799 +0000 UTC m=+5771.644604185" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.787060 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.894813 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data-custom\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.894949 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-etc-machine-id\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.894986 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.895042 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.895075 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffccr\" (UniqueName: \"kubernetes.io/projected/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-kube-api-access-ffccr\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.895130 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-scripts\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.895208 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-combined-ca-bundle\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.895289 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-logs\") pod \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\" (UID: \"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2\") " Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.895807 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.896449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-logs" (OuterVolumeSpecName: "logs") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.901464 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.902826 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-kube-api-access-ffccr" (OuterVolumeSpecName: "kube-api-access-ffccr") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "kube-api-access-ffccr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.904203 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-scripts" (OuterVolumeSpecName: "scripts") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.926545 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.970674 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data" (OuterVolumeSpecName: "config-data") pod "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" (UID: "fdd865fd-45f5-4ef2-ab50-875f8d3eacd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.997721 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffccr\" (UniqueName: \"kubernetes.io/projected/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-kube-api-access-ffccr\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.997760 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.997772 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.997782 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.997793 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:37 crc kubenswrapper[4982]: I0123 09:31:37.997802 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135164 4982 generic.go:334] "Generic (PLEG): container finished" podID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerID="e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647" exitCode=0 Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135195 4982 generic.go:334] "Generic (PLEG): container finished" podID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerID="9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447" exitCode=143 Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135251 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135253 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2","Type":"ContainerDied","Data":"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647"} Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135333 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2","Type":"ContainerDied","Data":"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447"} Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135386 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fdd865fd-45f5-4ef2-ab50-875f8d3eacd2","Type":"ContainerDied","Data":"d622f3b42ad58c98b7db6131fab6e124576bc2d231c033590d86e14113691e35"} Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135402 4982 scope.go:117] "RemoveContainer" containerID="e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.135596 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.165563 4982 scope.go:117] "RemoveContainer" containerID="9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.169618 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.181601 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.189744 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:38 crc kubenswrapper[4982]: E0123 09:31:38.190157 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api-log" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.190169 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api-log" Jan 23 09:31:38 crc kubenswrapper[4982]: E0123 09:31:38.190204 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.190211 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.190385 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.190409 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" containerName="cinder-api-log" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.191449 4982 scope.go:117] "RemoveContainer" containerID="e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.193407 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: E0123 09:31:38.195410 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647\": container with ID starting with e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647 not found: ID does not exist" containerID="e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.195444 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647"} err="failed to get container status \"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647\": rpc error: code = NotFound desc = could not find container \"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647\": container with ID starting with e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647 not found: ID does not exist" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.195469 4982 scope.go:117] "RemoveContainer" containerID="9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447" Jan 23 09:31:38 crc kubenswrapper[4982]: E0123 09:31:38.195789 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447\": container with ID starting with 9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447 not found: ID does not exist" containerID="9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.195837 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447"} err="failed to get container status \"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447\": rpc error: code = NotFound desc = could not find container \"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447\": container with ID starting with 9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447 not found: ID does not exist" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.195870 4982 scope.go:117] "RemoveContainer" containerID="e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.196181 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647"} err="failed to get container status \"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647\": rpc error: code = NotFound desc = could not find container \"e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647\": container with ID starting with e0844a73d7d9ab508591ccdf39b766ba3f905d0167378312a788c671c7d83647 not found: ID does not exist" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.196205 4982 scope.go:117] "RemoveContainer" containerID="9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.196456 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447"} err="failed to get 
container status \"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447\": rpc error: code = NotFound desc = could not find container \"9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447\": container with ID starting with 9c52c83ca0642dcea9fe247754d02c72d70415cea5221d5d463b66a3a4892447 not found: ID does not exist" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.205730 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.205822 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.206159 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.206304 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nbrsv" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.206821 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.207696 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.208527 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210139 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xwp\" (UniqueName: \"kubernetes.io/projected/3121636d-aae7-4db4-a964-66e6825b1c96-kube-api-access-j5xwp\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210203 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data-custom\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210250 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210275 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210348 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3121636d-aae7-4db4-a964-66e6825b1c96-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210414 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210465 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-scripts\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210549 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.210604 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3121636d-aae7-4db4-a964-66e6825b1c96-logs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312251 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3121636d-aae7-4db4-a964-66e6825b1c96-logs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312322 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xwp\" (UniqueName: \"kubernetes.io/projected/3121636d-aae7-4db4-a964-66e6825b1c96-kube-api-access-j5xwp\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312379 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data-custom\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312412 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312446 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312516 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3121636d-aae7-4db4-a964-66e6825b1c96-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312568 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312618 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-scripts\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.312690 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.313556 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3121636d-aae7-4db4-a964-66e6825b1c96-logs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.313661 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3121636d-aae7-4db4-a964-66e6825b1c96-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.317190 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-scripts\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.317681 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.318431 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-public-tls-certs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.318504 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.319084 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.319591 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data-custom\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.335056 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xwp\" (UniqueName: \"kubernetes.io/projected/3121636d-aae7-4db4-a964-66e6825b1c96-kube-api-access-j5xwp\") pod \"cinder-api-0\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.520903 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:31:38 crc kubenswrapper[4982]: W0123 09:31:38.972090 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3121636d_aae7_4db4_a964_66e6825b1c96.slice/crio-9e896fbdc06dc9e735c0973222ff9ab7b0f73747e8328de2a1c9aaa0a3ad1078 WatchSource:0}: Error finding container 9e896fbdc06dc9e735c0973222ff9ab7b0f73747e8328de2a1c9aaa0a3ad1078: Status 404 returned error can't find the container with id 9e896fbdc06dc9e735c0973222ff9ab7b0f73747e8328de2a1c9aaa0a3ad1078 Jan 23 09:31:38 crc kubenswrapper[4982]: I0123 09:31:38.977823 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:31:39 crc kubenswrapper[4982]: I0123 09:31:39.150963 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3121636d-aae7-4db4-a964-66e6825b1c96","Type":"ContainerStarted","Data":"9e896fbdc06dc9e735c0973222ff9ab7b0f73747e8328de2a1c9aaa0a3ad1078"} Jan 23 09:31:39 crc kubenswrapper[4982]: I0123 09:31:39.842994 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd865fd-45f5-4ef2-ab50-875f8d3eacd2" path="/var/lib/kubelet/pods/fdd865fd-45f5-4ef2-ab50-875f8d3eacd2/volumes" Jan 23 09:31:40 crc kubenswrapper[4982]: I0123 09:31:40.174674 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3121636d-aae7-4db4-a964-66e6825b1c96","Type":"ContainerStarted","Data":"f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141"} Jan 23 09:31:40 crc kubenswrapper[4982]: I0123 09:31:40.827604 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:31:40 crc kubenswrapper[4982]: E0123 09:31:40.828128 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:31:41 crc kubenswrapper[4982]: I0123 09:31:41.187210 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3121636d-aae7-4db4-a964-66e6825b1c96","Type":"ContainerStarted","Data":"93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79"} Jan 23 09:31:41 crc kubenswrapper[4982]: I0123 09:31:41.187708 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 09:31:41 crc kubenswrapper[4982]: I0123 09:31:41.213792 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=3.213760367 podStartE2EDuration="3.213760367s" podCreationTimestamp="2026-01-23 09:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:31:41.204256869 +0000 UTC m=+5775.684620265" watchObservedRunningTime="2026-01-23 09:31:41.213760367 +0000 UTC m=+5775.694123763" Jan 23 09:31:43 crc kubenswrapper[4982]: I0123 09:31:43.759447 4982 scope.go:117] "RemoveContainer" containerID="3a77e4ee87e70a6dbb1528ce991a0bc9a7bb4e218d0598679de1cee0ff51ada8" Jan 23 09:31:44 crc kubenswrapper[4982]: I0123 09:31:44.783934 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" Jan 23 09:31:44 crc kubenswrapper[4982]: I0123 09:31:44.861975 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8696d78c59-6gzgr"] Jan 23 09:31:44 crc kubenswrapper[4982]: I0123 09:31:44.862222 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" containerName="dnsmasq-dns" containerID="cri-o://013e3a8ede4cb46ae8fb5a53714cfa84d8a44224c6844e1eb773f23b4394b640" gracePeriod=10 Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.238732 4982 generic.go:334] "Generic (PLEG): container finished" podID="c09db274-e506-4f64-8dfa-6358ed30a801" containerID="013e3a8ede4cb46ae8fb5a53714cfa84d8a44224c6844e1eb773f23b4394b640" exitCode=0 Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.239035 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" event={"ID":"c09db274-e506-4f64-8dfa-6358ed30a801","Type":"ContainerDied","Data":"013e3a8ede4cb46ae8fb5a53714cfa84d8a44224c6844e1eb773f23b4394b640"} Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.354163 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.459280 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trsbj\" (UniqueName: \"kubernetes.io/projected/c09db274-e506-4f64-8dfa-6358ed30a801-kube-api-access-trsbj\") pod \"c09db274-e506-4f64-8dfa-6358ed30a801\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.459445 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-dns-svc\") pod \"c09db274-e506-4f64-8dfa-6358ed30a801\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.459499 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-nb\") pod \"c09db274-e506-4f64-8dfa-6358ed30a801\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.459549 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-config\") pod \"c09db274-e506-4f64-8dfa-6358ed30a801\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.459655 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-sb\") pod \"c09db274-e506-4f64-8dfa-6358ed30a801\" (UID: \"c09db274-e506-4f64-8dfa-6358ed30a801\") " Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.473262 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09db274-e506-4f64-8dfa-6358ed30a801-kube-api-access-trsbj" (OuterVolumeSpecName: "kube-api-access-trsbj") pod "c09db274-e506-4f64-8dfa-6358ed30a801" (UID: "c09db274-e506-4f64-8dfa-6358ed30a801"). InnerVolumeSpecName "kube-api-access-trsbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.521879 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-config" (OuterVolumeSpecName: "config") pod "c09db274-e506-4f64-8dfa-6358ed30a801" (UID: "c09db274-e506-4f64-8dfa-6358ed30a801"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.523529 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c09db274-e506-4f64-8dfa-6358ed30a801" (UID: "c09db274-e506-4f64-8dfa-6358ed30a801"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.533681 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c09db274-e506-4f64-8dfa-6358ed30a801" (UID: "c09db274-e506-4f64-8dfa-6358ed30a801"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.545230 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c09db274-e506-4f64-8dfa-6358ed30a801" (UID: "c09db274-e506-4f64-8dfa-6358ed30a801"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.562076 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trsbj\" (UniqueName: \"kubernetes.io/projected/c09db274-e506-4f64-8dfa-6358ed30a801-kube-api-access-trsbj\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.562116 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.562127 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.562135 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:45 crc kubenswrapper[4982]: I0123 09:31:45.562143 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c09db274-e506-4f64-8dfa-6358ed30a801-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:46 crc kubenswrapper[4982]: I0123 09:31:46.252981 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" event={"ID":"c09db274-e506-4f64-8dfa-6358ed30a801","Type":"ContainerDied","Data":"f89248a1824357d13a472830d39195693db36d95271042b4bb25ca2315068dfd"} Jan 23 09:31:46 crc kubenswrapper[4982]: I0123 09:31:46.253363 4982 scope.go:117] "RemoveContainer" containerID="013e3a8ede4cb46ae8fb5a53714cfa84d8a44224c6844e1eb773f23b4394b640" Jan 23 09:31:46 crc kubenswrapper[4982]: I0123 09:31:46.253113 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8696d78c59-6gzgr" Jan 23 09:31:46 crc kubenswrapper[4982]: I0123 09:31:46.280815 4982 scope.go:117] "RemoveContainer" containerID="73578c22f441c45208a7d981d741a7f1aed8f9ce6cb3ca368a723ff3a657b272" Jan 23 09:31:46 crc kubenswrapper[4982]: I0123 09:31:46.287206 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8696d78c59-6gzgr"] Jan 23 09:31:46 crc kubenswrapper[4982]: I0123 09:31:46.307561 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8696d78c59-6gzgr"] Jan 23 09:31:47 crc kubenswrapper[4982]: I0123 09:31:47.846490 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" path="/var/lib/kubelet/pods/c09db274-e506-4f64-8dfa-6358ed30a801/volumes" Jan 23 09:31:50 crc kubenswrapper[4982]: I0123 09:31:50.398517 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 09:31:52 crc kubenswrapper[4982]: I0123 09:31:52.827711 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:31:52 crc kubenswrapper[4982]: E0123 09:31:52.828282 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:32:03 crc kubenswrapper[4982]: I0123 09:32:03.829054 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:32:03 crc kubenswrapper[4982]: E0123 09:32:03.830121 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.102889 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:07 crc kubenswrapper[4982]: E0123 09:32:07.103655 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" containerName="init" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.103671 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" containerName="init" Jan 23 09:32:07 crc kubenswrapper[4982]: E0123 09:32:07.103697 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" containerName="dnsmasq-dns" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.103703 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" containerName="dnsmasq-dns" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.103959 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09db274-e506-4f64-8dfa-6358ed30a801" containerName="dnsmasq-dns" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.105308 4982 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.108129 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.160590 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.254397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72g74\" (UniqueName: \"kubernetes.io/projected/d96891b3-949c-4b24-8eb1-38c835e58c23-kube-api-access-72g74\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.255121 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-scripts\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.255164 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.255189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.255403 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d96891b3-949c-4b24-8eb1-38c835e58c23-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.255530 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358055 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d96891b3-949c-4b24-8eb1-38c835e58c23-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358362 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358545 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72g74\" (UniqueName: \"kubernetes.io/projected/d96891b3-949c-4b24-8eb1-38c835e58c23-kube-api-access-72g74\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358671 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-scripts\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358785 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358867 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.358146 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d96891b3-949c-4b24-8eb1-38c835e58c23-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.366276 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.366508 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.366951 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-scripts\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.367743 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.376262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72g74\" (UniqueName: \"kubernetes.io/projected/d96891b3-949c-4b24-8eb1-38c835e58c23-kube-api-access-72g74\") pod \"cinder-scheduler-0\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " 
pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.428822 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:32:07 crc kubenswrapper[4982]: I0123 09:32:07.893113 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:08 crc kubenswrapper[4982]: I0123 09:32:08.474612 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d96891b3-949c-4b24-8eb1-38c835e58c23","Type":"ContainerStarted","Data":"d7639e1f47326f316b0b2caa1c6dab357295854c118ed23af8b7ab9a9f8c1f83"} Jan 23 09:32:08 crc kubenswrapper[4982]: I0123 09:32:08.588746 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:32:08 crc kubenswrapper[4982]: I0123 09:32:08.589264 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api-log" containerID="cri-o://f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141" gracePeriod=30 Jan 23 09:32:08 crc kubenswrapper[4982]: I0123 09:32:08.589881 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api" containerID="cri-o://93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79" gracePeriod=30 Jan 23 09:32:09 crc kubenswrapper[4982]: I0123 09:32:09.485319 4982 generic.go:334] "Generic (PLEG): container finished" podID="3121636d-aae7-4db4-a964-66e6825b1c96" containerID="f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141" exitCode=143 Jan 23 09:32:09 crc kubenswrapper[4982]: I0123 09:32:09.485519 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3121636d-aae7-4db4-a964-66e6825b1c96","Type":"ContainerDied","Data":"f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141"} Jan 23 09:32:09 crc kubenswrapper[4982]: I0123 09:32:09.487739 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d96891b3-949c-4b24-8eb1-38c835e58c23","Type":"ContainerStarted","Data":"7ce08247d772506d9d61c3d372e7385afa1f8dcc459c8b789e0839a73d8bb7b2"} Jan 23 09:32:09 crc kubenswrapper[4982]: I0123 09:32:09.487782 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d96891b3-949c-4b24-8eb1-38c835e58c23","Type":"ContainerStarted","Data":"74914c6fb999f37ef68eb94186648cdb013fcdfa39f62ea7f3a4818f0ef656b9"} Jan 23 09:32:09 crc kubenswrapper[4982]: I0123 09:32:09.515758 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.515740239 podStartE2EDuration="2.515740239s" podCreationTimestamp="2026-01-23 09:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:09.510166787 +0000 UTC m=+5803.990530173" watchObservedRunningTime="2026-01-23 09:32:09.515740239 +0000 UTC m=+5803.996103625" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.191416 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270700 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-scripts\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270743 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3121636d-aae7-4db4-a964-66e6825b1c96-logs\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270792 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-public-tls-certs\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270819 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xwp\" (UniqueName: \"kubernetes.io/projected/3121636d-aae7-4db4-a964-66e6825b1c96-kube-api-access-j5xwp\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270843 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-combined-ca-bundle\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270882 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270943 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data-custom\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.270963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-internal-tls-certs\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.271005 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3121636d-aae7-4db4-a964-66e6825b1c96-etc-machine-id\") pod \"3121636d-aae7-4db4-a964-66e6825b1c96\" (UID: \"3121636d-aae7-4db4-a964-66e6825b1c96\") " Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.271407 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3121636d-aae7-4db4-a964-66e6825b1c96-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: 
"3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.271846 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3121636d-aae7-4db4-a964-66e6825b1c96-logs" (OuterVolumeSpecName: "logs") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.314013 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3121636d-aae7-4db4-a964-66e6825b1c96-kube-api-access-j5xwp" (OuterVolumeSpecName: "kube-api-access-j5xwp") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "kube-api-access-j5xwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.314509 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.327165 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-scripts" (OuterVolumeSpecName: "scripts") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.330490 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.338683 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data" (OuterVolumeSpecName: "config-data") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.343240 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.357033 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3121636d-aae7-4db4-a964-66e6825b1c96" (UID: "3121636d-aae7-4db4-a964-66e6825b1c96"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376716 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376758 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3121636d-aae7-4db4-a964-66e6825b1c96-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376776 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376790 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5xwp\" (UniqueName: \"kubernetes.io/projected/3121636d-aae7-4db4-a964-66e6825b1c96-kube-api-access-j5xwp\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376801 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376912 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376968 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376981 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3121636d-aae7-4db4-a964-66e6825b1c96-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.376993 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3121636d-aae7-4db4-a964-66e6825b1c96-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.429508 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.516641 4982 generic.go:334] "Generic (PLEG): container finished" podID="3121636d-aae7-4db4-a964-66e6825b1c96" containerID="93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79" exitCode=0 Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.516687 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3121636d-aae7-4db4-a964-66e6825b1c96","Type":"ContainerDied","Data":"93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79"} Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.516713 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3121636d-aae7-4db4-a964-66e6825b1c96","Type":"ContainerDied","Data":"9e896fbdc06dc9e735c0973222ff9ab7b0f73747e8328de2a1c9aaa0a3ad1078"} Jan 23 09:32:12 crc kubenswrapper[4982]: 
I0123 09:32:12.516730 4982 scope.go:117] "RemoveContainer" containerID="93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.516857 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.560059 4982 scope.go:117] "RemoveContainer" containerID="f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.560586 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.580083 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.594915 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:32:12 crc kubenswrapper[4982]: E0123 09:32:12.595454 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api-log" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.595472 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api-log" Jan 23 09:32:12 crc kubenswrapper[4982]: E0123 09:32:12.595496 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.595502 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.595911 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.595927 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" containerName="cinder-api-log" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.597022 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.603431 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.603734 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.603860 4982 scope.go:117] "RemoveContainer" containerID="93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.604129 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 09:32:12 crc kubenswrapper[4982]: E0123 09:32:12.604839 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79\": container with ID starting with 93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79 not found: ID does not exist" containerID="93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.604881 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79"} err="failed to get container status \"93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79\": rpc error: code = NotFound desc = could not find container \"93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79\": container with ID starting with 93ce681e95252097e42b569466b282e6fbb8d77b5ce2b87eb0892a078fa39a79 not found: ID does not exist" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.604913 4982 scope.go:117] "RemoveContainer" containerID="f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141" Jan 23 09:32:12 crc kubenswrapper[4982]: E0123 09:32:12.605204 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141\": container with ID starting with f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141 not found: ID does not exist" containerID="f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.605226 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141"} err="failed to get container status \"f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141\": rpc error: code = NotFound desc = could not find container \"f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141\": container with ID starting with f9e69b8d47e988a5311c82ee94d410f88a2012b39bdc620979a2e838ef7f2141 not found: ID does not exist" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.615492 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.682806 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " 
pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.682856 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71ec0e1-97be-48f5-9601-93cdf12b89a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.682880 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.682928 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqn6v\" (UniqueName: \"kubernetes.io/projected/f71ec0e1-97be-48f5-9601-93cdf12b89a6-kube-api-access-wqn6v\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.682963 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-scripts\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.682988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.683011 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f71ec0e1-97be-48f5-9601-93cdf12b89a6-logs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.683031 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-config-data\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.683047 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784430 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqn6v\" (UniqueName: \"kubernetes.io/projected/f71ec0e1-97be-48f5-9601-93cdf12b89a6-kube-api-access-wqn6v\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784523 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-scripts\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784563 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784596 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f71ec0e1-97be-48f5-9601-93cdf12b89a6-logs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784644 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-config-data\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784668 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784775 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71ec0e1-97be-48f5-9601-93cdf12b89a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.784831 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.785142 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f71ec0e1-97be-48f5-9601-93cdf12b89a6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.785462 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f71ec0e1-97be-48f5-9601-93cdf12b89a6-logs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.789723 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.789867 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-scripts\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.790000 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-config-data\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.791161 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.792335 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.793107 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f71ec0e1-97be-48f5-9601-93cdf12b89a6-config-data-custom\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.805030 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqn6v\" (UniqueName: \"kubernetes.io/projected/f71ec0e1-97be-48f5-9601-93cdf12b89a6-kube-api-access-wqn6v\") pod \"cinder-api-0\" (UID: \"f71ec0e1-97be-48f5-9601-93cdf12b89a6\") " pod="openstack/cinder-api-0" Jan 23 09:32:12 crc kubenswrapper[4982]: I0123 09:32:12.926101 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 09:32:13 crc kubenswrapper[4982]: I0123 09:32:13.367492 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 09:32:13 crc kubenswrapper[4982]: I0123 09:32:13.531150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f71ec0e1-97be-48f5-9601-93cdf12b89a6","Type":"ContainerStarted","Data":"b2209ff2412566b0c071f0c8b3a5ca63773964af849ae9721e0e11e937925fd4"} Jan 23 09:32:13 crc kubenswrapper[4982]: I0123 09:32:13.838462 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3121636d-aae7-4db4-a964-66e6825b1c96" path="/var/lib/kubelet/pods/3121636d-aae7-4db4-a964-66e6825b1c96/volumes" Jan 23 09:32:14 crc kubenswrapper[4982]: I0123 09:32:14.543588 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f71ec0e1-97be-48f5-9601-93cdf12b89a6","Type":"ContainerStarted","Data":"f729ffc0c46803c05823f3243192eaf31e6c001ec55da6f1ebc72e89d1996eae"} Jan 23 09:32:14 crc kubenswrapper[4982]: I0123 09:32:14.544690 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 09:32:14 crc kubenswrapper[4982]: I0123 09:32:14.544712 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f71ec0e1-97be-48f5-9601-93cdf12b89a6","Type":"ContainerStarted","Data":"1dd5967ba25b7b7701ceb30f6083768cbd3cf89b5b515789c4b1bd24af198006"} Jan 23 09:32:14 crc kubenswrapper[4982]: I0123 09:32:14.572520 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.572496584 podStartE2EDuration="2.572496584s" podCreationTimestamp="2026-01-23 09:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:14.561416523 +0000 UTC m=+5809.041779909" watchObservedRunningTime="2026-01-23 09:32:14.572496584 +0000 UTC m=+5809.052859970" Jan 23 09:32:17 crc kubenswrapper[4982]: I0123 09:32:17.630815 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 09:32:17 crc kubenswrapper[4982]: I0123 09:32:17.693775 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:17 crc kubenswrapper[4982]: I0123 09:32:17.827906 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:32:17 crc kubenswrapper[4982]: E0123 09:32:17.828374 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:32:18 crc kubenswrapper[4982]: I0123 09:32:18.581375 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="cinder-scheduler" containerID="cri-o://74914c6fb999f37ef68eb94186648cdb013fcdfa39f62ea7f3a4818f0ef656b9" gracePeriod=30 Jan 23 09:32:18 crc kubenswrapper[4982]: I0123 09:32:18.581538 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="probe" containerID="cri-o://7ce08247d772506d9d61c3d372e7385afa1f8dcc459c8b789e0839a73d8bb7b2" gracePeriod=30 Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.611823 4982 generic.go:334] "Generic (PLEG): container finished" podID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerID="7ce08247d772506d9d61c3d372e7385afa1f8dcc459c8b789e0839a73d8bb7b2" exitCode=0 Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.612230 4982 generic.go:334] "Generic (PLEG): container finished" podID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerID="74914c6fb999f37ef68eb94186648cdb013fcdfa39f62ea7f3a4818f0ef656b9" exitCode=0 Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.611927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d96891b3-949c-4b24-8eb1-38c835e58c23","Type":"ContainerDied","Data":"7ce08247d772506d9d61c3d372e7385afa1f8dcc459c8b789e0839a73d8bb7b2"} Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.612300 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d96891b3-949c-4b24-8eb1-38c835e58c23","Type":"ContainerDied","Data":"74914c6fb999f37ef68eb94186648cdb013fcdfa39f62ea7f3a4818f0ef656b9"} Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.749417 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.851420 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data-custom\") pod \"d96891b3-949c-4b24-8eb1-38c835e58c23\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.851892 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-scripts\") pod \"d96891b3-949c-4b24-8eb1-38c835e58c23\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.851935 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-combined-ca-bundle\") pod \"d96891b3-949c-4b24-8eb1-38c835e58c23\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.851962 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d96891b3-949c-4b24-8eb1-38c835e58c23-etc-machine-id\") pod \"d96891b3-949c-4b24-8eb1-38c835e58c23\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.852006 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data\") pod \"d96891b3-949c-4b24-8eb1-38c835e58c23\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.852084 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72g74\" (UniqueName: \"kubernetes.io/projected/d96891b3-949c-4b24-8eb1-38c835e58c23-kube-api-access-72g74\") pod 
\"d96891b3-949c-4b24-8eb1-38c835e58c23\" (UID: \"d96891b3-949c-4b24-8eb1-38c835e58c23\") " Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.852085 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d96891b3-949c-4b24-8eb1-38c835e58c23-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d96891b3-949c-4b24-8eb1-38c835e58c23" (UID: "d96891b3-949c-4b24-8eb1-38c835e58c23"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.852731 4982 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d96891b3-949c-4b24-8eb1-38c835e58c23-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.857501 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-scripts" (OuterVolumeSpecName: "scripts") pod "d96891b3-949c-4b24-8eb1-38c835e58c23" (UID: "d96891b3-949c-4b24-8eb1-38c835e58c23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.860851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d96891b3-949c-4b24-8eb1-38c835e58c23" (UID: "d96891b3-949c-4b24-8eb1-38c835e58c23"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.867997 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96891b3-949c-4b24-8eb1-38c835e58c23-kube-api-access-72g74" (OuterVolumeSpecName: "kube-api-access-72g74") pod "d96891b3-949c-4b24-8eb1-38c835e58c23" (UID: "d96891b3-949c-4b24-8eb1-38c835e58c23"). InnerVolumeSpecName "kube-api-access-72g74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.905979 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d96891b3-949c-4b24-8eb1-38c835e58c23" (UID: "d96891b3-949c-4b24-8eb1-38c835e58c23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.955720 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.956555 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.956584 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72g74\" (UniqueName: \"kubernetes.io/projected/d96891b3-949c-4b24-8eb1-38c835e58c23-kube-api-access-72g74\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.956596 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:19 crc kubenswrapper[4982]: I0123 09:32:19.979582 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data" (OuterVolumeSpecName: "config-data") pod "d96891b3-949c-4b24-8eb1-38c835e58c23" (UID: "d96891b3-949c-4b24-8eb1-38c835e58c23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.058021 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d96891b3-949c-4b24-8eb1-38c835e58c23-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.624381 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d96891b3-949c-4b24-8eb1-38c835e58c23","Type":"ContainerDied","Data":"d7639e1f47326f316b0b2caa1c6dab357295854c118ed23af8b7ab9a9f8c1f83"} Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.624434 4982 scope.go:117] "RemoveContainer" containerID="7ce08247d772506d9d61c3d372e7385afa1f8dcc459c8b789e0839a73d8bb7b2" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.624476 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.656564 4982 scope.go:117] "RemoveContainer" containerID="74914c6fb999f37ef68eb94186648cdb013fcdfa39f62ea7f3a4818f0ef656b9" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.665039 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.677322 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.694706 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:20 crc kubenswrapper[4982]: E0123 09:32:20.695212 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="probe" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.695236 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="probe" Jan 23 09:32:20 crc kubenswrapper[4982]: E0123 09:32:20.695268 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="cinder-scheduler" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.695276 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="cinder-scheduler" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.695492 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="probe" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.695513 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" containerName="cinder-scheduler" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.698110 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.700788 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.712165 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.875973 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-scripts\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.876022 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdxp\" (UniqueName: \"kubernetes.io/projected/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-kube-api-access-dtdxp\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.876056 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.876100 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-config-data\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.876332 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.876404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.977985 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-scripts\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.978032 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdxp\" (UniqueName: \"kubernetes.io/projected/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-kube-api-access-dtdxp\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.978062 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.978104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-config-data\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.978210 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.978235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.978277 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.981505 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.981986 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-config-data\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.982787 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.983584 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-scripts\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 09:32:20 crc kubenswrapper[4982]: I0123 09:32:20.996055 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdxp\" (UniqueName: \"kubernetes.io/projected/4a93b5e2-a1b2-427f-8bc6-013bef4e3255-kube-api-access-dtdxp\") pod \"cinder-scheduler-0\" (UID: \"4a93b5e2-a1b2-427f-8bc6-013bef4e3255\") " pod="openstack/cinder-scheduler-0" Jan 23 
Jan 23 09:32:21 crc kubenswrapper[4982]: I0123 09:32:21.478257 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 09:32:21 crc kubenswrapper[4982]: I0123 09:32:21.637818 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4a93b5e2-a1b2-427f-8bc6-013bef4e3255","Type":"ContainerStarted","Data":"fd90901d2fa18e6121956f5e03e4e49b54c4e8e234db49fb18ede9b28ceef287"} Jan 23 09:32:21 crc kubenswrapper[4982]: I0123 09:32:21.840723 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96891b3-949c-4b24-8eb1-38c835e58c23" path="/var/lib/kubelet/pods/d96891b3-949c-4b24-8eb1-38c835e58c23/volumes" Jan 23 09:32:22 crc kubenswrapper[4982]: I0123 09:32:22.666925 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4a93b5e2-a1b2-427f-8bc6-013bef4e3255","Type":"ContainerStarted","Data":"8641f507ee0fe1d42c57899f288a405a75946fadf36bca7b44bdfaa05f8d9581"} Jan 23 09:32:22 crc kubenswrapper[4982]: I0123 09:32:22.668185 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4a93b5e2-a1b2-427f-8bc6-013bef4e3255","Type":"ContainerStarted","Data":"54c63f5f3b94c90cbc84ff426d693fb523ffb0bd40863f1853c612ccf7420c24"} Jan 23 09:32:22 crc kubenswrapper[4982]: I0123 09:32:22.690502 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.690480574 podStartE2EDuration="2.690480574s" podCreationTimestamp="2026-01-23 09:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:22.687720679 +0000 UTC m=+5817.168084075" watchObservedRunningTime="2026-01-23 09:32:22.690480574 +0000 UTC m=+5817.170843960" Jan 23 09:32:25 crc kubenswrapper[4982]: I0123 09:32:25.124173 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 09:32:26 crc kubenswrapper[4982]: I0123 09:32:26.028900 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 09:32:26 crc kubenswrapper[4982]: I0123 09:32:26.281920 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 09:32:30 crc kubenswrapper[4982]: I0123 09:32:30.826737 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:32:30 crc kubenswrapper[4982]: E0123 09:32:30.827520 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.672179 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vh2xw"] Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.674018 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.684550 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vh2xw"] Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.770259 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-707c-account-create-update-xm4pw"] Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.771840 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.774916 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.781905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6319f407-c75a-4ff2-bbc2-5bbd285e941a-operator-scripts\") pod \"glance-db-create-vh2xw\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.782009 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4gt\" (UniqueName: \"kubernetes.io/projected/6319f407-c75a-4ff2-bbc2-5bbd285e941a-kube-api-access-nn4gt\") pod \"glance-db-create-vh2xw\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.791595 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-707c-account-create-update-xm4pw"] Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.884078 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4gt\" (UniqueName: \"kubernetes.io/projected/6319f407-c75a-4ff2-bbc2-5bbd285e941a-kube-api-access-nn4gt\") pod \"glance-db-create-vh2xw\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.884135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpphz\" (UniqueName: \"kubernetes.io/projected/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-kube-api-access-lpphz\") pod \"glance-707c-account-create-update-xm4pw\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.884271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6319f407-c75a-4ff2-bbc2-5bbd285e941a-operator-scripts\") pod \"glance-db-create-vh2xw\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.884327 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-operator-scripts\") pod \"glance-707c-account-create-update-xm4pw\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.885141 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6319f407-c75a-4ff2-bbc2-5bbd285e941a-operator-scripts\") pod \"glance-db-create-vh2xw\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.903794 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4gt\" (UniqueName: \"kubernetes.io/projected/6319f407-c75a-4ff2-bbc2-5bbd285e941a-kube-api-access-nn4gt\") pod \"glance-db-create-vh2xw\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.985950 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpphz\" (UniqueName: \"kubernetes.io/projected/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-kube-api-access-lpphz\") pod \"glance-707c-account-create-update-xm4pw\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.986343 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-operator-scripts\") pod \"glance-707c-account-create-update-xm4pw\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.987981 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-operator-scripts\") pod \"glance-707c-account-create-update-xm4pw\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:33 crc kubenswrapper[4982]: I0123 09:32:33.996821 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.009254 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpphz\" (UniqueName: \"kubernetes.io/projected/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-kube-api-access-lpphz\") pod \"glance-707c-account-create-update-xm4pw\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.094109 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.501428 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vh2xw"] Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.613443 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-707c-account-create-update-xm4pw"] Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.794701 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-707c-account-create-update-xm4pw" event={"ID":"a94468e3-2ef9-4eaf-bd81-cca4b468ae38","Type":"ContainerStarted","Data":"7d598171eee93d8332de69684b62ed151c40642e141f1567b1229f83e8aa81de"} Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.796612 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vh2xw" event={"ID":"6319f407-c75a-4ff2-bbc2-5bbd285e941a","Type":"ContainerStarted","Data":"6e584a8bb1a63e2a0fd48284c29842f5e3953c3be66c965359ecc655c32f9d3a"} Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.796688 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vh2xw" event={"ID":"6319f407-c75a-4ff2-bbc2-5bbd285e941a","Type":"ContainerStarted","Data":"35b0891b08f4d95164f366625902d1dc263c78af7206ece265eb0f11a3b2ff12"} Jan 23 09:32:34 crc kubenswrapper[4982]: I0123 09:32:34.821417 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vh2xw" podStartSLOduration=1.821394657 podStartE2EDuration="1.821394657s" podCreationTimestamp="2026-01-23 09:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:34.811387036 +0000 UTC m=+5829.291750422" watchObservedRunningTime="2026-01-23 09:32:34.821394657 +0000 UTC m=+5829.301758043" Jan 23 09:32:35 crc kubenswrapper[4982]: I0123 09:32:35.809469 4982 generic.go:334] "Generic (PLEG): container finished" podID="6319f407-c75a-4ff2-bbc2-5bbd285e941a" containerID="6e584a8bb1a63e2a0fd48284c29842f5e3953c3be66c965359ecc655c32f9d3a" exitCode=0 Jan 23 09:32:35 crc kubenswrapper[4982]: I0123 09:32:35.809556 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vh2xw" event={"ID":"6319f407-c75a-4ff2-bbc2-5bbd285e941a","Type":"ContainerDied","Data":"6e584a8bb1a63e2a0fd48284c29842f5e3953c3be66c965359ecc655c32f9d3a"} Jan 23 09:32:35 crc kubenswrapper[4982]: I0123 09:32:35.811384 4982 generic.go:334] "Generic (PLEG): container finished" podID="a94468e3-2ef9-4eaf-bd81-cca4b468ae38" containerID="8e9b8a9c19ea324df867283f49f79f7dd97c3b8dc741f123409b2afef0bd82b8" exitCode=0 Jan 23 09:32:35 crc kubenswrapper[4982]: I0123 09:32:35.811430 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-707c-account-create-update-xm4pw" event={"ID":"a94468e3-2ef9-4eaf-bd81-cca4b468ae38","Type":"ContainerDied","Data":"8e9b8a9c19ea324df867283f49f79f7dd97c3b8dc741f123409b2afef0bd82b8"} Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.220080 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.229250 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.358864 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn4gt\" (UniqueName: \"kubernetes.io/projected/6319f407-c75a-4ff2-bbc2-5bbd285e941a-kube-api-access-nn4gt\") pod \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.359446 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-operator-scripts\") pod \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.359510 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpphz\" (UniqueName: \"kubernetes.io/projected/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-kube-api-access-lpphz\") pod \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\" (UID: \"a94468e3-2ef9-4eaf-bd81-cca4b468ae38\") " Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.359588 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6319f407-c75a-4ff2-bbc2-5bbd285e941a-operator-scripts\") pod \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\" (UID: \"6319f407-c75a-4ff2-bbc2-5bbd285e941a\") " Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.360073 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6319f407-c75a-4ff2-bbc2-5bbd285e941a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6319f407-c75a-4ff2-bbc2-5bbd285e941a" (UID: "6319f407-c75a-4ff2-bbc2-5bbd285e941a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.360315 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a94468e3-2ef9-4eaf-bd81-cca4b468ae38" (UID: "a94468e3-2ef9-4eaf-bd81-cca4b468ae38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.368475 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6319f407-c75a-4ff2-bbc2-5bbd285e941a-kube-api-access-nn4gt" (OuterVolumeSpecName: "kube-api-access-nn4gt") pod "6319f407-c75a-4ff2-bbc2-5bbd285e941a" (UID: "6319f407-c75a-4ff2-bbc2-5bbd285e941a"). InnerVolumeSpecName "kube-api-access-nn4gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.368742 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-kube-api-access-lpphz" (OuterVolumeSpecName: "kube-api-access-lpphz") pod "a94468e3-2ef9-4eaf-bd81-cca4b468ae38" (UID: "a94468e3-2ef9-4eaf-bd81-cca4b468ae38"). InnerVolumeSpecName "kube-api-access-lpphz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.462099 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.462133 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpphz\" (UniqueName: \"kubernetes.io/projected/a94468e3-2ef9-4eaf-bd81-cca4b468ae38-kube-api-access-lpphz\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.462145 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6319f407-c75a-4ff2-bbc2-5bbd285e941a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.462155 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn4gt\" (UniqueName: \"kubernetes.io/projected/6319f407-c75a-4ff2-bbc2-5bbd285e941a-kube-api-access-nn4gt\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.834411 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vh2xw" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.835998 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-707c-account-create-update-xm4pw" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.846135 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vh2xw" event={"ID":"6319f407-c75a-4ff2-bbc2-5bbd285e941a","Type":"ContainerDied","Data":"35b0891b08f4d95164f366625902d1dc263c78af7206ece265eb0f11a3b2ff12"} Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.846177 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b0891b08f4d95164f366625902d1dc263c78af7206ece265eb0f11a3b2ff12" Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.846187 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-707c-account-create-update-xm4pw" event={"ID":"a94468e3-2ef9-4eaf-bd81-cca4b468ae38","Type":"ContainerDied","Data":"7d598171eee93d8332de69684b62ed151c40642e141f1567b1229f83e8aa81de"} Jan 23 09:32:37 crc kubenswrapper[4982]: I0123 09:32:37.846198 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d598171eee93d8332de69684b62ed151c40642e141f1567b1229f83e8aa81de" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.021756 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cb8gf"] Jan 23 09:32:39 crc kubenswrapper[4982]: E0123 09:32:39.022828 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a94468e3-2ef9-4eaf-bd81-cca4b468ae38" containerName="mariadb-account-create-update" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.022850 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a94468e3-2ef9-4eaf-bd81-cca4b468ae38" containerName="mariadb-account-create-update" Jan 23 09:32:39 crc kubenswrapper[4982]: E0123 09:32:39.022879 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6319f407-c75a-4ff2-bbc2-5bbd285e941a" containerName="mariadb-database-create" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.022889 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6319f407-c75a-4ff2-bbc2-5bbd285e941a" containerName="mariadb-database-create" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.023140 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6319f407-c75a-4ff2-bbc2-5bbd285e941a" containerName="mariadb-database-create" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.023160 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a94468e3-2ef9-4eaf-bd81-cca4b468ae38" containerName="mariadb-account-create-update" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.024001 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.030395 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.032348 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6562b" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.046924 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cb8gf"] Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.198751 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-combined-ca-bundle\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.198845 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-db-sync-config-data\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.198943 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zgkc\" (UniqueName: \"kubernetes.io/projected/402f0d81-2b8f-443d-bc52-d9e333a31306-kube-api-access-4zgkc\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.199017 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-config-data\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.301462 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zgkc\" (UniqueName: \"kubernetes.io/projected/402f0d81-2b8f-443d-bc52-d9e333a31306-kube-api-access-4zgkc\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.301848 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-config-data\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc 
kubenswrapper[4982]: I0123 09:32:39.302077 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-combined-ca-bundle\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.302233 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-db-sync-config-data\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.308738 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-db-sync-config-data\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.312551 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-config-data\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.313414 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-combined-ca-bundle\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.331345 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zgkc\" (UniqueName: \"kubernetes.io/projected/402f0d81-2b8f-443d-bc52-d9e333a31306-kube-api-access-4zgkc\") pod \"glance-db-sync-cb8gf\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.343054 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:39 crc kubenswrapper[4982]: W0123 09:32:39.911113 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod402f0d81_2b8f_443d_bc52_d9e333a31306.slice/crio-e098b21a5d316238d36c512b7789336d708e6a33e322c5c6e1ef9748e7be7d9e WatchSource:0}: Error finding container e098b21a5d316238d36c512b7789336d708e6a33e322c5c6e1ef9748e7be7d9e: Status 404 returned error can't find the container with id e098b21a5d316238d36c512b7789336d708e6a33e322c5c6e1ef9748e7be7d9e Jan 23 09:32:39 crc kubenswrapper[4982]: I0123 09:32:39.912663 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cb8gf"] Jan 23 09:32:40 crc kubenswrapper[4982]: I0123 09:32:40.860984 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cb8gf" event={"ID":"402f0d81-2b8f-443d-bc52-d9e333a31306","Type":"ContainerStarted","Data":"e098b21a5d316238d36c512b7789336d708e6a33e322c5c6e1ef9748e7be7d9e"} Jan 23 09:32:41 crc kubenswrapper[4982]: I0123 09:32:41.828231 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:32:41 crc kubenswrapper[4982]: E0123 09:32:41.828432 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:32:41 crc kubenswrapper[4982]: I0123 09:32:41.870567 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cb8gf" event={"ID":"402f0d81-2b8f-443d-bc52-d9e333a31306","Type":"ContainerStarted","Data":"43ef017812b2195689b5f221ee9b4e25977e91a029129610351a5c738e3c42b6"} Jan 23 09:32:41 crc kubenswrapper[4982]: I0123 09:32:41.892658 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cb8gf" podStartSLOduration=3.892635958 podStartE2EDuration="3.892635958s" podCreationTimestamp="2026-01-23 09:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:41.883753926 +0000 UTC m=+5836.364117322" watchObservedRunningTime="2026-01-23 09:32:41.892635958 +0000 UTC m=+5836.372999354" Jan 23 09:32:43 crc kubenswrapper[4982]: I0123 09:32:43.883879 4982 scope.go:117] "RemoveContainer" containerID="8db7576e463e870ae81c645fd1651f5b1e25f9a7e18a45f0121b9c5aa98bec7e" Jan 23 09:32:46 crc kubenswrapper[4982]: I0123 09:32:46.920767 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cb8gf" event={"ID":"402f0d81-2b8f-443d-bc52-d9e333a31306","Type":"ContainerDied","Data":"43ef017812b2195689b5f221ee9b4e25977e91a029129610351a5c738e3c42b6"} Jan 23 09:32:46 crc kubenswrapper[4982]: I0123 09:32:46.920783 4982 generic.go:334] "Generic (PLEG): container finished" podID="402f0d81-2b8f-443d-bc52-d9e333a31306" containerID="43ef017812b2195689b5f221ee9b4e25977e91a029129610351a5c738e3c42b6" exitCode=0 Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.424688 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.605963 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zgkc\" (UniqueName: \"kubernetes.io/projected/402f0d81-2b8f-443d-bc52-d9e333a31306-kube-api-access-4zgkc\") pod \"402f0d81-2b8f-443d-bc52-d9e333a31306\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.606018 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-config-data\") pod \"402f0d81-2b8f-443d-bc52-d9e333a31306\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.606074 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-db-sync-config-data\") pod \"402f0d81-2b8f-443d-bc52-d9e333a31306\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.606241 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-combined-ca-bundle\") pod \"402f0d81-2b8f-443d-bc52-d9e333a31306\" (UID: \"402f0d81-2b8f-443d-bc52-d9e333a31306\") " Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.611647 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "402f0d81-2b8f-443d-bc52-d9e333a31306" (UID: "402f0d81-2b8f-443d-bc52-d9e333a31306"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.621553 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402f0d81-2b8f-443d-bc52-d9e333a31306-kube-api-access-4zgkc" (OuterVolumeSpecName: "kube-api-access-4zgkc") pod "402f0d81-2b8f-443d-bc52-d9e333a31306" (UID: "402f0d81-2b8f-443d-bc52-d9e333a31306"). InnerVolumeSpecName "kube-api-access-4zgkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.638565 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "402f0d81-2b8f-443d-bc52-d9e333a31306" (UID: "402f0d81-2b8f-443d-bc52-d9e333a31306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.655154 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-config-data" (OuterVolumeSpecName: "config-data") pod "402f0d81-2b8f-443d-bc52-d9e333a31306" (UID: "402f0d81-2b8f-443d-bc52-d9e333a31306"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.708531 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zgkc\" (UniqueName: \"kubernetes.io/projected/402f0d81-2b8f-443d-bc52-d9e333a31306-kube-api-access-4zgkc\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.708581 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.708591 4982 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.708598 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/402f0d81-2b8f-443d-bc52-d9e333a31306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.944000 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cb8gf" event={"ID":"402f0d81-2b8f-443d-bc52-d9e333a31306","Type":"ContainerDied","Data":"e098b21a5d316238d36c512b7789336d708e6a33e322c5c6e1ef9748e7be7d9e"} Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.944365 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e098b21a5d316238d36c512b7789336d708e6a33e322c5c6e1ef9748e7be7d9e" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:48.944061 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cb8gf" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.286138 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:32:50 crc kubenswrapper[4982]: E0123 09:32:49.286588 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="402f0d81-2b8f-443d-bc52-d9e333a31306" containerName="glance-db-sync" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.286599 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="402f0d81-2b8f-443d-bc52-d9e333a31306" containerName="glance-db-sync" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.286855 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="402f0d81-2b8f-443d-bc52-d9e333a31306" containerName="glance-db-sync" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.287954 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.299149 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6562b" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.299367 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.299588 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.321033 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.322535 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.322698 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.322824 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-logs\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.322847 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.322871 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxg7v\" (UniqueName: \"kubernetes.io/projected/b23f583c-48c5-4805-accc-369d3adcbf7c-kube-api-access-wxg7v\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.322918 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.378870 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d8944c655-7dw6p"] Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.387412 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.423380 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8944c655-7dw6p"] Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425559 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-logs\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425614 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425665 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxg7v\" (UniqueName: \"kubernetes.io/projected/b23f583c-48c5-4805-accc-369d3adcbf7c-kube-api-access-wxg7v\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425700 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-dns-svc\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425792 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-config\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425829 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425890 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-sb\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425924 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-nb\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.425947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.426008 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8bwm\" (UniqueName: \"kubernetes.io/projected/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-kube-api-access-t8bwm\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.427521 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-logs\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.428638 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.436368 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.449205 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.450413 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.459809 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxg7v\" (UniqueName: \"kubernetes.io/projected/b23f583c-48c5-4805-accc-369d3adcbf7c-kube-api-access-wxg7v\") pod \"glance-default-external-api-0\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.461795 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.463404 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.469087 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.474167 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.532724 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-nb\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.533045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8bwm\" (UniqueName: \"kubernetes.io/projected/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-kube-api-access-t8bwm\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.534954 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-nb\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.535422 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-dns-svc\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.535462 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-config\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.535531 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-sb\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.536283 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-config\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.536309 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-sb\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.536410 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-dns-svc\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.554974 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8bwm\" (UniqueName: \"kubernetes.io/projected/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-kube-api-access-t8bwm\") pod \"dnsmasq-dns-6d8944c655-7dw6p\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.625251 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.636719 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.636814 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.636854 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2x5c\" (UniqueName: \"kubernetes.io/projected/9a378114-4034-4d3d-abb4-b188c1ba8983-kube-api-access-p2x5c\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.636917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.636953 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.637005 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.723919 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.742076 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.742182 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.742235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.742307 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.742351 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2x5c\" (UniqueName: \"kubernetes.io/projected/9a378114-4034-4d3d-abb4-b188c1ba8983-kube-api-access-p2x5c\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.742399 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.743333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-logs\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.743369 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.746665 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.747648 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0"
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.748398 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0"
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.777095 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2x5c\" (UniqueName: \"kubernetes.io/projected/9a378114-4034-4d3d-abb4-b188c1ba8983-kube-api-access-p2x5c\") pod \"glance-default-internal-api-0\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") " pod="openstack/glance-default-internal-api-0"
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:49.892208 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:50.496142 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:50.778170 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8944c655-7dw6p"]
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:50.895229 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:50.972849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" event={"ID":"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670","Type":"ContainerStarted","Data":"a2b75843f260cd75e62971edb242d3eca81a4a2e906a4be5cd9c6387f6db274f"}
Jan 23 09:32:50 crc kubenswrapper[4982]: I0123 09:32:50.979726 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a378114-4034-4d3d-abb4-b188c1ba8983","Type":"ContainerStarted","Data":"c0543606c266e49aeef6e53c59ef0c0ddd29a0a1809890e41197744be9b4d689"}
Jan 23 09:32:51 crc kubenswrapper[4982]: I0123 09:32:51.534152 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 09:32:52 crc kubenswrapper[4982]: I0123 09:32:52.020419 4982 generic.go:334] "Generic (PLEG): container finished" podID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerID="587b6a9ef5393a6edbdba6f7fdcedc6c91dc0db18a6d8641be1ad9395bfc1426" exitCode=0
Jan 23 09:32:52 crc kubenswrapper[4982]: I0123 09:32:52.020927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" event={"ID":"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670","Type":"ContainerDied","Data":"587b6a9ef5393a6edbdba6f7fdcedc6c91dc0db18a6d8641be1ad9395bfc1426"}
Jan 23 09:32:52 crc kubenswrapper[4982]: I0123 09:32:52.042249 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b23f583c-48c5-4805-accc-369d3adcbf7c","Type":"ContainerStarted","Data":"5f9a21439c160d8df695be008ed30059cccb1082337ffd0a312782471a4c42a4"}
Jan 23 09:32:52 crc kubenswrapper[4982]: I0123 09:32:52.048951 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a378114-4034-4d3d-abb4-b188c1ba8983","Type":"ContainerStarted","Data":"d045fc3f4ff6c5a4b407e0f271e3a7645abeeea94fb16146aabe5b7dd9160839"}
Jan 23 09:32:52 crc kubenswrapper[4982]: I0123 09:32:52.112256 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.061160 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b23f583c-48c5-4805-accc-369d3adcbf7c","Type":"ContainerStarted","Data":"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"}
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.061690 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b23f583c-48c5-4805-accc-369d3adcbf7c","Type":"ContainerStarted","Data":"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"}
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.064810 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a378114-4034-4d3d-abb4-b188c1ba8983","Type":"ContainerStarted","Data":"ac1a36a5ed37b0a0aa899b3ebda3bb42d412dc38436ada99866d733bb080e509"}
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.064894 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-log" containerID="cri-o://d045fc3f4ff6c5a4b407e0f271e3a7645abeeea94fb16146aabe5b7dd9160839" gracePeriod=30
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.064949 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-httpd" containerID="cri-o://ac1a36a5ed37b0a0aa899b3ebda3bb42d412dc38436ada99866d733bb080e509" gracePeriod=30
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.067925 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" event={"ID":"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670","Type":"ContainerStarted","Data":"fdaccac52864b213eaac8e1ab4b07a36f5d543ee6ae50c304f92963a45b20e32"}
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.068105 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p"
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.091913 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.091890867 podStartE2EDuration="4.091890867s" podCreationTimestamp="2026-01-23 09:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:53.091345843 +0000 UTC m=+5847.571709229" watchObservedRunningTime="2026-01-23 09:32:53.091890867 +0000 UTC m=+5847.572254263"
Jan 23 09:32:53 crc kubenswrapper[4982]: I0123 09:32:53.112860 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" podStartSLOduration=4.112836846 podStartE2EDuration="4.112836846s" podCreationTimestamp="2026-01-23 09:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:53.106826483 +0000 UTC m=+5847.587189869" watchObservedRunningTime="2026-01-23 09:32:53.112836846 +0000 UTC m=+5847.593200232"
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.096293 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerID="ac1a36a5ed37b0a0aa899b3ebda3bb42d412dc38436ada99866d733bb080e509" exitCode=0
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.096849 4982 generic.go:334] "Generic (PLEG): container finished" podID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerID="d045fc3f4ff6c5a4b407e0f271e3a7645abeeea94fb16146aabe5b7dd9160839" exitCode=143
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.097030 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-log" containerID="cri-o://65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f" gracePeriod=30
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.096385 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a378114-4034-4d3d-abb4-b188c1ba8983","Type":"ContainerDied","Data":"ac1a36a5ed37b0a0aa899b3ebda3bb42d412dc38436ada99866d733bb080e509"}
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.097143 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a378114-4034-4d3d-abb4-b188c1ba8983","Type":"ContainerDied","Data":"d045fc3f4ff6c5a4b407e0f271e3a7645abeeea94fb16146aabe5b7dd9160839"}
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.097805 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-httpd" containerID="cri-o://0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2" gracePeriod=30
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.135897 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.135880193 podStartE2EDuration="5.135880193s" podCreationTimestamp="2026-01-23 09:32:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:54.13097105 +0000 UTC m=+5848.611334436" watchObservedRunningTime="2026-01-23 09:32:54.135880193 +0000 UTC m=+5848.616243579"
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.191726 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.367982 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-scripts\") pod \"9a378114-4034-4d3d-abb4-b188c1ba8983\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368055 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2x5c\" (UniqueName: \"kubernetes.io/projected/9a378114-4034-4d3d-abb4-b188c1ba8983-kube-api-access-p2x5c\") pod \"9a378114-4034-4d3d-abb4-b188c1ba8983\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368254 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-combined-ca-bundle\") pod \"9a378114-4034-4d3d-abb4-b188c1ba8983\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368285 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-logs\") pod \"9a378114-4034-4d3d-abb4-b188c1ba8983\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368313 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-httpd-run\") pod \"9a378114-4034-4d3d-abb4-b188c1ba8983\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368401 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-config-data\") pod \"9a378114-4034-4d3d-abb4-b188c1ba8983\" (UID: \"9a378114-4034-4d3d-abb4-b188c1ba8983\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368745 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9a378114-4034-4d3d-abb4-b188c1ba8983" (UID: "9a378114-4034-4d3d-abb4-b188c1ba8983"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.368857 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-logs" (OuterVolumeSpecName: "logs") pod "9a378114-4034-4d3d-abb4-b188c1ba8983" (UID: "9a378114-4034-4d3d-abb4-b188c1ba8983"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.369525 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-logs\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.369597 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9a378114-4034-4d3d-abb4-b188c1ba8983-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.378723 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a378114-4034-4d3d-abb4-b188c1ba8983-kube-api-access-p2x5c" (OuterVolumeSpecName: "kube-api-access-p2x5c") pod "9a378114-4034-4d3d-abb4-b188c1ba8983" (UID: "9a378114-4034-4d3d-abb4-b188c1ba8983"). InnerVolumeSpecName "kube-api-access-p2x5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.378881 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-scripts" (OuterVolumeSpecName: "scripts") pod "9a378114-4034-4d3d-abb4-b188c1ba8983" (UID: "9a378114-4034-4d3d-abb4-b188c1ba8983"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.399836 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a378114-4034-4d3d-abb4-b188c1ba8983" (UID: "9a378114-4034-4d3d-abb4-b188c1ba8983"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.436827 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-config-data" (OuterVolumeSpecName: "config-data") pod "9a378114-4034-4d3d-abb4-b188c1ba8983" (UID: "9a378114-4034-4d3d-abb4-b188c1ba8983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.471271 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.471597 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.471607 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a378114-4034-4d3d-abb4-b188c1ba8983-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.471619 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2x5c\" (UniqueName: \"kubernetes.io/projected/9a378114-4034-4d3d-abb4-b188c1ba8983-kube-api-access-p2x5c\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.643968 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.775654 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxg7v\" (UniqueName: \"kubernetes.io/projected/b23f583c-48c5-4805-accc-369d3adcbf7c-kube-api-access-wxg7v\") pod \"b23f583c-48c5-4805-accc-369d3adcbf7c\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.775725 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-combined-ca-bundle\") pod \"b23f583c-48c5-4805-accc-369d3adcbf7c\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.775790 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-config-data\") pod \"b23f583c-48c5-4805-accc-369d3adcbf7c\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.775836 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-scripts\") pod \"b23f583c-48c5-4805-accc-369d3adcbf7c\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.775878 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-logs\") pod \"b23f583c-48c5-4805-accc-369d3adcbf7c\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.775934 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-httpd-run\") pod \"b23f583c-48c5-4805-accc-369d3adcbf7c\" (UID: \"b23f583c-48c5-4805-accc-369d3adcbf7c\") "
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.776358 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b23f583c-48c5-4805-accc-369d3adcbf7c" (UID: "b23f583c-48c5-4805-accc-369d3adcbf7c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.776538 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-logs" (OuterVolumeSpecName: "logs") pod "b23f583c-48c5-4805-accc-369d3adcbf7c" (UID: "b23f583c-48c5-4805-accc-369d3adcbf7c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.778502 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.778599 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b23f583c-48c5-4805-accc-369d3adcbf7c-logs\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.779757 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-scripts" (OuterVolumeSpecName: "scripts") pod "b23f583c-48c5-4805-accc-369d3adcbf7c" (UID: "b23f583c-48c5-4805-accc-369d3adcbf7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.782190 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23f583c-48c5-4805-accc-369d3adcbf7c-kube-api-access-wxg7v" (OuterVolumeSpecName: "kube-api-access-wxg7v") pod "b23f583c-48c5-4805-accc-369d3adcbf7c" (UID: "b23f583c-48c5-4805-accc-369d3adcbf7c"). InnerVolumeSpecName "kube-api-access-wxg7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.808732 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b23f583c-48c5-4805-accc-369d3adcbf7c" (UID: "b23f583c-48c5-4805-accc-369d3adcbf7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.823438 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-config-data" (OuterVolumeSpecName: "config-data") pod "b23f583c-48c5-4805-accc-369d3adcbf7c" (UID: "b23f583c-48c5-4805-accc-369d3adcbf7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.828010 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd"
Jan 23 09:32:54 crc kubenswrapper[4982]: E0123 09:32:54.828220 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.881880 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxg7v\" (UniqueName: \"kubernetes.io/projected/b23f583c-48c5-4805-accc-369d3adcbf7c-kube-api-access-wxg7v\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.882567 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.882600 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:54 crc kubenswrapper[4982]: I0123 09:32:54.882610 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b23f583c-48c5-4805-accc-369d3adcbf7c-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111096 4982 generic.go:334] "Generic (PLEG): container finished" podID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerID="0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2" exitCode=0
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111160 4982 generic.go:334] "Generic (PLEG): container finished" podID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerID="65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f" exitCode=143
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111181 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111231 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b23f583c-48c5-4805-accc-369d3adcbf7c","Type":"ContainerDied","Data":"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"}
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111290 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b23f583c-48c5-4805-accc-369d3adcbf7c","Type":"ContainerDied","Data":"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"}
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111308 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b23f583c-48c5-4805-accc-369d3adcbf7c","Type":"ContainerDied","Data":"5f9a21439c160d8df695be008ed30059cccb1082337ffd0a312782471a4c42a4"}
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.111332 4982 scope.go:117] "RemoveContainer" containerID="0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.116769 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9a378114-4034-4d3d-abb4-b188c1ba8983","Type":"ContainerDied","Data":"c0543606c266e49aeef6e53c59ef0c0ddd29a0a1809890e41197744be9b4d689"}
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.116878 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.166893 4982 scope.go:117] "RemoveContainer" containerID="65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.200458 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.213471 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.236848 4982 scope.go:117] "RemoveContainer" containerID="0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"
Jan 23 09:32:55 crc kubenswrapper[4982]: E0123 09:32:55.238948 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2\": container with ID starting with 0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2 not found: ID does not exist" containerID="0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.239006 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"} err="failed to get container status \"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2\": rpc error: code = NotFound desc = could not find container \"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2\": container with ID starting with 0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2 not found: ID does not exist"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.239039 4982 scope.go:117] "RemoveContainer" containerID="65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"
Jan 23 09:32:55 crc kubenswrapper[4982]: E0123 09:32:55.239544 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f\": container with ID starting with 65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f not found: ID does not exist" containerID="65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.239575 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"} err="failed to get container status \"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f\": rpc error: code = NotFound desc = could not find container \"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f\": container with ID starting with 65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f not found: ID does not exist"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.239595 4982 scope.go:117] "RemoveContainer" containerID="0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.239853 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2"} err="failed to get container status \"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2\": rpc error: code = NotFound desc = could not find container \"0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2\": container with ID starting with 0d7ec5a3dbd9d1ac27b6fb668c6198eb35ba849fc5579e6a11192f83cf0aa2a2 not found: ID does not exist"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.239878 4982 scope.go:117] "RemoveContainer" containerID="65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.240076 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f"} err="failed to get container status \"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f\": rpc error: code = NotFound desc = could not find container \"65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f\": container with ID starting with 65090f0c7632afff2345eefe2cc09f6810819236417a18b0e0266b11b0a81a8f not found: ID does not exist"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.240101 4982 scope.go:117] "RemoveContainer" containerID="ac1a36a5ed37b0a0aa899b3ebda3bb42d412dc38436ada99866d733bb080e509"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.243556 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.257361 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.278451 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 23 09:32:55 crc kubenswrapper[4982]: E0123 09:32:55.279184 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-httpd"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279205 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-httpd"
Jan 23 09:32:55 crc kubenswrapper[4982]: E0123 09:32:55.279239 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-log"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279249 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-log"
Jan 23 09:32:55 crc kubenswrapper[4982]: E0123 09:32:55.279268 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-httpd"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279276 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-httpd"
Jan 23 09:32:55 crc kubenswrapper[4982]: E0123 09:32:55.279312 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-log"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279320 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-log"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279588 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-log"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279617 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-log"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279657 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" containerName="glance-httpd"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.279675 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" containerName="glance-httpd"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.281206 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.285321 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.286374 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.286567 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.286907 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6562b"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.295054 4982 scope.go:117] "RemoveContainer" containerID="d045fc3f4ff6c5a4b407e0f271e3a7645abeeea94fb16146aabe5b7dd9160839"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.295813 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-logs\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.295867 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-scripts\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.295898 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.295988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-config-data\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.296008 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64kkm\" (UniqueName: \"kubernetes.io/projected/3986d8ac-65c3-4513-9369-83ea91215dac-kube-api-access-64kkm\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.296038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.296062 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0"
(UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.296136 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.318117 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.319942 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.323860 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.323865 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.332134 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398006 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398113 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xbt\" (UniqueName: \"kubernetes.io/projected/19167825-b7b1-4661-b978-4c63d9d13d12-kube-api-access-45xbt\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398165 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-config-data\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398192 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64kkm\" (UniqueName: \"kubernetes.io/projected/3986d8ac-65c3-4513-9369-83ea91215dac-kube-api-access-64kkm\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398230 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398309 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398337 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398389 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-logs\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398417 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398456 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398476 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398500 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-logs\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.398547 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-scripts\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.401231 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-logs\") pod \"glance-default-external-api-0\" (UID: 
\"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.401490 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.403411 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.405445 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-scripts\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.405845 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-config-data\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.405861 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.457321 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64kkm\" (UniqueName: \"kubernetes.io/projected/3986d8ac-65c3-4513-9369-83ea91215dac-kube-api-access-64kkm\") pod \"glance-default-external-api-0\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.499571 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.499617 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.499670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-logs\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc 
kubenswrapper[4982]: I0123 09:32:55.499694 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.499720 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.499735 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.499802 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xbt\" (UniqueName: \"kubernetes.io/projected/19167825-b7b1-4661-b978-4c63d9d13d12-kube-api-access-45xbt\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.500274 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.500692 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-logs\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.503081 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.503461 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.504125 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.505639 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.518382 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xbt\" (UniqueName: \"kubernetes.io/projected/19167825-b7b1-4661-b978-4c63d9d13d12-kube-api-access-45xbt\") pod \"glance-default-internal-api-0\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.609684 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.639780 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.882138 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a378114-4034-4d3d-abb4-b188c1ba8983" path="/var/lib/kubelet/pods/9a378114-4034-4d3d-abb4-b188c1ba8983/volumes" Jan 23 09:32:55 crc kubenswrapper[4982]: I0123 09:32:55.883107 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23f583c-48c5-4805-accc-369d3adcbf7c" path="/var/lib/kubelet/pods/b23f583c-48c5-4805-accc-369d3adcbf7c/volumes" Jan 23 09:32:56 crc kubenswrapper[4982]: I0123 09:32:56.220933 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:32:56 crc kubenswrapper[4982]: W0123 09:32:56.227486 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19167825_b7b1_4661_b978_4c63d9d13d12.slice/crio-f0bd2298083b6255299633d26f48f3aca62efe52a25210acb4140f42a833c4ec WatchSource:0}: Error finding container f0bd2298083b6255299633d26f48f3aca62efe52a25210acb4140f42a833c4ec: Status 404 returned error can't find the container with id f0bd2298083b6255299633d26f48f3aca62efe52a25210acb4140f42a833c4ec Jan 23 09:32:56 crc kubenswrapper[4982]: I0123 09:32:56.325412 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:32:57 crc kubenswrapper[4982]: I0123 09:32:57.144222 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19167825-b7b1-4661-b978-4c63d9d13d12","Type":"ContainerStarted","Data":"0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e"} Jan 23 09:32:57 crc kubenswrapper[4982]: I0123 09:32:57.144564 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19167825-b7b1-4661-b978-4c63d9d13d12","Type":"ContainerStarted","Data":"f0bd2298083b6255299633d26f48f3aca62efe52a25210acb4140f42a833c4ec"} Jan 23 09:32:57 crc kubenswrapper[4982]: I0123 09:32:57.146807 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3986d8ac-65c3-4513-9369-83ea91215dac","Type":"ContainerStarted","Data":"17cef1f0cabf4e5278e702b1ac9f4319db73ebaf8713f6e2e8341dc092cb93bc"} Jan 23 09:32:57 crc kubenswrapper[4982]: I0123 09:32:57.146841 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3986d8ac-65c3-4513-9369-83ea91215dac","Type":"ContainerStarted","Data":"7b00f9d11fd38c9b556a0f5370d1b28709dae659006d5127605b307ffdcb2c43"} Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.165309 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19167825-b7b1-4661-b978-4c63d9d13d12","Type":"ContainerStarted","Data":"c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3"} Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.167185 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3986d8ac-65c3-4513-9369-83ea91215dac","Type":"ContainerStarted","Data":"8a65799603c27e31eb775be9e5ea32340fbe9357a118123727f2210c5987765e"} Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.194342 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.194317444 podStartE2EDuration="4.194317444s" podCreationTimestamp="2026-01-23 09:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:59.185306719 +0000 UTC m=+5853.665670105" watchObservedRunningTime="2026-01-23 09:32:59.194317444 +0000 UTC m=+5853.674680830" Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.211604 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.211578752 podStartE2EDuration="4.211578752s" podCreationTimestamp="2026-01-23 09:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:32:59.20669231 +0000 UTC m=+5853.687055696" watchObservedRunningTime="2026-01-23 09:32:59.211578752 +0000 UTC m=+5853.691942148" Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.726786 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.796738 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-555f4b56f5-z4cxc"] Jan 23 09:32:59 crc kubenswrapper[4982]: I0123 09:32:59.797011 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerName="dnsmasq-dns" containerID="cri-o://bbbcce015bc58d99e9f1c649e927cd33ac886b64701f736d5c5655ca6f5b6e29" gracePeriod=10 Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.193153 4982 generic.go:334] "Generic (PLEG): container finished" podID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerID="bbbcce015bc58d99e9f1c649e927cd33ac886b64701f736d5c5655ca6f5b6e29" exitCode=0 Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.193376 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" event={"ID":"78597e4e-a98c-4321-b6b8-fcd131d271e5","Type":"ContainerDied","Data":"bbbcce015bc58d99e9f1c649e927cd33ac886b64701f736d5c5655ca6f5b6e29"} Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.322817 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.511549 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-config\") pod \"78597e4e-a98c-4321-b6b8-fcd131d271e5\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.511643 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlrkv\" (UniqueName: \"kubernetes.io/projected/78597e4e-a98c-4321-b6b8-fcd131d271e5-kube-api-access-wlrkv\") pod \"78597e4e-a98c-4321-b6b8-fcd131d271e5\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.511676 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-nb\") pod \"78597e4e-a98c-4321-b6b8-fcd131d271e5\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.511759 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-sb\") pod \"78597e4e-a98c-4321-b6b8-fcd131d271e5\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.511786 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-dns-svc\") pod \"78597e4e-a98c-4321-b6b8-fcd131d271e5\" (UID: \"78597e4e-a98c-4321-b6b8-fcd131d271e5\") " Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.517379 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78597e4e-a98c-4321-b6b8-fcd131d271e5-kube-api-access-wlrkv" (OuterVolumeSpecName: "kube-api-access-wlrkv") pod "78597e4e-a98c-4321-b6b8-fcd131d271e5" (UID: "78597e4e-a98c-4321-b6b8-fcd131d271e5"). InnerVolumeSpecName "kube-api-access-wlrkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.564702 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78597e4e-a98c-4321-b6b8-fcd131d271e5" (UID: "78597e4e-a98c-4321-b6b8-fcd131d271e5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.564796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78597e4e-a98c-4321-b6b8-fcd131d271e5" (UID: "78597e4e-a98c-4321-b6b8-fcd131d271e5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.570784 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78597e4e-a98c-4321-b6b8-fcd131d271e5" (UID: "78597e4e-a98c-4321-b6b8-fcd131d271e5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.572191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-config" (OuterVolumeSpecName: "config") pod "78597e4e-a98c-4321-b6b8-fcd131d271e5" (UID: "78597e4e-a98c-4321-b6b8-fcd131d271e5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.615060 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.615097 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlrkv\" (UniqueName: \"kubernetes.io/projected/78597e4e-a98c-4321-b6b8-fcd131d271e5-kube-api-access-wlrkv\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.615108 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.615117 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:00 crc kubenswrapper[4982]: I0123 09:33:00.615125 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78597e4e-a98c-4321-b6b8-fcd131d271e5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.205446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555f4b56f5-z4cxc" event={"ID":"78597e4e-a98c-4321-b6b8-fcd131d271e5","Type":"ContainerDied","Data":"d04ea37a9e7dca3580af964d37523f59912b76c90a01890ecb2f7acbfe5e2438"} Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.205506 4982 scope.go:117] "RemoveContainer" containerID="bbbcce015bc58d99e9f1c649e927cd33ac886b64701f736d5c5655ca6f5b6e29" Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.205510 4982 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.252004 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-555f4b56f5-z4cxc"]
Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.252059 4982 scope.go:117] "RemoveContainer" containerID="dce96a585256751a3285f1ca1e863a68c01c30f36362107e0b17900271210e6b"
Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.261902 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-555f4b56f5-z4cxc"]
Jan 23 09:33:01 crc kubenswrapper[4982]: I0123 09:33:01.839502 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" path="/var/lib/kubelet/pods/78597e4e-a98c-4321-b6b8-fcd131d271e5/volumes"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.610750 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.611400 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.640466 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.640562 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.640814 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.654015 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.673385 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:05 crc kubenswrapper[4982]: I0123 09:33:05.689670 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:06 crc kubenswrapper[4982]: I0123 09:33:06.255244 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:06 crc kubenswrapper[4982]: I0123 09:33:06.255573 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:06 crc kubenswrapper[4982]: I0123 09:33:06.255587 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:06 crc kubenswrapper[4982]: I0123 09:33:06.255650 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:08 crc kubenswrapper[4982]: I0123 09:33:08.215407 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:08 crc kubenswrapper[4982]: I0123 09:33:08.219983 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:08 crc kubenswrapper[4982]: I0123 09:33:08.224812 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 23 09:33:08 crc kubenswrapper[4982]: I0123 09:33:08.244488 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 23 09:33:08 crc kubenswrapper[4982]: I0123 09:33:08.828174 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd"
Jan 23 09:33:08 crc kubenswrapper[4982]: E0123 09:33:08.828434 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.245030 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-l6l7x"]
Jan 23 09:33:14 crc kubenswrapper[4982]: E0123 09:33:14.247507 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerName="dnsmasq-dns"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.247615 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerName="dnsmasq-dns"
Jan 23 09:33:14 crc kubenswrapper[4982]: E0123 09:33:14.247859 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerName="init"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.247981 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerName="init"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.248428 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="78597e4e-a98c-4321-b6b8-fcd131d271e5" containerName="dnsmasq-dns"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.255530 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l6l7x"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.258179 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l6l7x"]
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.270205 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-471c-account-create-update-jqct2"]
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.273233 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-471c-account-create-update-jqct2"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.275798 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.283588 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-471c-account-create-update-jqct2"]
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.401906 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-operator-scripts\") pod \"placement-db-create-l6l7x\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " pod="openstack/placement-db-create-l6l7x"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.402261 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nnxl\" (UniqueName: \"kubernetes.io/projected/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-kube-api-access-5nnxl\") pod \"placement-db-create-l6l7x\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " pod="openstack/placement-db-create-l6l7x"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.402291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzjsd\" (UniqueName: \"kubernetes.io/projected/60a09655-a300-46ad-8564-09230e4ea5b9-kube-api-access-nzjsd\") pod \"placement-471c-account-create-update-jqct2\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " pod="openstack/placement-471c-account-create-update-jqct2"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.402382 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60a09655-a300-46ad-8564-09230e4ea5b9-operator-scripts\") pod \"placement-471c-account-create-update-jqct2\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " pod="openstack/placement-471c-account-create-update-jqct2"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.503717 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-operator-scripts\") pod \"placement-db-create-l6l7x\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " pod="openstack/placement-db-create-l6l7x"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.503797 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nnxl\" (UniqueName: \"kubernetes.io/projected/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-kube-api-access-5nnxl\") pod \"placement-db-create-l6l7x\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " pod="openstack/placement-db-create-l6l7x"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.503828 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzjsd\" (UniqueName: \"kubernetes.io/projected/60a09655-a300-46ad-8564-09230e4ea5b9-kube-api-access-nzjsd\") pod \"placement-471c-account-create-update-jqct2\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " pod="openstack/placement-471c-account-create-update-jqct2"
Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.503910 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/60a09655-a300-46ad-8564-09230e4ea5b9-operator-scripts\") pod \"placement-471c-account-create-update-jqct2\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " pod="openstack/placement-471c-account-create-update-jqct2" Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.504681 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60a09655-a300-46ad-8564-09230e4ea5b9-operator-scripts\") pod \"placement-471c-account-create-update-jqct2\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " pod="openstack/placement-471c-account-create-update-jqct2" Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.504765 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-operator-scripts\") pod \"placement-db-create-l6l7x\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " pod="openstack/placement-db-create-l6l7x" Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.523115 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nnxl\" (UniqueName: \"kubernetes.io/projected/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-kube-api-access-5nnxl\") pod \"placement-db-create-l6l7x\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " pod="openstack/placement-db-create-l6l7x" Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.530154 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzjsd\" (UniqueName: \"kubernetes.io/projected/60a09655-a300-46ad-8564-09230e4ea5b9-kube-api-access-nzjsd\") pod \"placement-471c-account-create-update-jqct2\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " pod="openstack/placement-471c-account-create-update-jqct2" Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.574741 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l6l7x" Jan 23 09:33:14 crc kubenswrapper[4982]: I0123 09:33:14.595125 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-471c-account-create-update-jqct2" Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.086449 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-471c-account-create-update-jqct2"] Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.106650 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l6l7x"] Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.356353 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l6l7x" event={"ID":"fc2c64bd-230e-4c7c-b065-03ae7161d5b1","Type":"ContainerStarted","Data":"2f3bbb46511fb638b7c64856fc0e502397bb3767cc8bbd3f1a0cf17a9355bc91"} Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.356688 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l6l7x" event={"ID":"fc2c64bd-230e-4c7c-b065-03ae7161d5b1","Type":"ContainerStarted","Data":"f075d65d610c2b426f9d950879dd172fd8c875153a2ce2050852ab8060e72ca4"} Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.360534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-471c-account-create-update-jqct2" event={"ID":"60a09655-a300-46ad-8564-09230e4ea5b9","Type":"ContainerStarted","Data":"e3eb91263721654b3569a23212edc6c6425b1e9f11c1ba3b1dfaa78bb1ddb4b3"} Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.360576 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-471c-account-create-update-jqct2" event={"ID":"60a09655-a300-46ad-8564-09230e4ea5b9","Type":"ContainerStarted","Data":"c73bbce0c3fc8573296c238dc4e14a6b0c673f3111798e32c66da37895beb694"} Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.402646 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-471c-account-create-update-jqct2" podStartSLOduration=1.402615461 podStartE2EDuration="1.402615461s" podCreationTimestamp="2026-01-23 09:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:15.394577763 +0000 UTC m=+5869.874941149" watchObservedRunningTime="2026-01-23 09:33:15.402615461 +0000 UTC m=+5869.882978847" Jan 23 09:33:15 crc kubenswrapper[4982]: I0123 09:33:15.406260 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-l6l7x" podStartSLOduration=1.4062497 podStartE2EDuration="1.4062497s" podCreationTimestamp="2026-01-23 09:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:15.380056208 +0000 UTC m=+5869.860419594" watchObservedRunningTime="2026-01-23 09:33:15.4062497 +0000 UTC m=+5869.886613086" Jan 23 09:33:16 crc kubenswrapper[4982]: I0123 09:33:16.373261 4982 generic.go:334] "Generic (PLEG): container finished" podID="fc2c64bd-230e-4c7c-b065-03ae7161d5b1" containerID="2f3bbb46511fb638b7c64856fc0e502397bb3767cc8bbd3f1a0cf17a9355bc91" exitCode=0 Jan 23 09:33:16 crc kubenswrapper[4982]: I0123 09:33:16.373289 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l6l7x" event={"ID":"fc2c64bd-230e-4c7c-b065-03ae7161d5b1","Type":"ContainerDied","Data":"2f3bbb46511fb638b7c64856fc0e502397bb3767cc8bbd3f1a0cf17a9355bc91"} Jan 23 09:33:16 crc kubenswrapper[4982]: I0123 09:33:16.375977 4982 generic.go:334] "Generic (PLEG): container finished" 
podID="60a09655-a300-46ad-8564-09230e4ea5b9" containerID="e3eb91263721654b3569a23212edc6c6425b1e9f11c1ba3b1dfaa78bb1ddb4b3" exitCode=0 Jan 23 09:33:16 crc kubenswrapper[4982]: I0123 09:33:16.376013 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-471c-account-create-update-jqct2" event={"ID":"60a09655-a300-46ad-8564-09230e4ea5b9","Type":"ContainerDied","Data":"e3eb91263721654b3569a23212edc6c6425b1e9f11c1ba3b1dfaa78bb1ddb4b3"} Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.843877 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l6l7x" Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.852005 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-471c-account-create-update-jqct2" Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.976097 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nnxl\" (UniqueName: \"kubernetes.io/projected/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-kube-api-access-5nnxl\") pod \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.976184 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-operator-scripts\") pod \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\" (UID: \"fc2c64bd-230e-4c7c-b065-03ae7161d5b1\") " Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.976390 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzjsd\" (UniqueName: \"kubernetes.io/projected/60a09655-a300-46ad-8564-09230e4ea5b9-kube-api-access-nzjsd\") pod \"60a09655-a300-46ad-8564-09230e4ea5b9\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.976520 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60a09655-a300-46ad-8564-09230e4ea5b9-operator-scripts\") pod \"60a09655-a300-46ad-8564-09230e4ea5b9\" (UID: \"60a09655-a300-46ad-8564-09230e4ea5b9\") " Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.978429 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc2c64bd-230e-4c7c-b065-03ae7161d5b1" (UID: "fc2c64bd-230e-4c7c-b065-03ae7161d5b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.979220 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a09655-a300-46ad-8564-09230e4ea5b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60a09655-a300-46ad-8564-09230e4ea5b9" (UID: "60a09655-a300-46ad-8564-09230e4ea5b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.996435 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-kube-api-access-5nnxl" (OuterVolumeSpecName: "kube-api-access-5nnxl") pod "fc2c64bd-230e-4c7c-b065-03ae7161d5b1" (UID: "fc2c64bd-230e-4c7c-b065-03ae7161d5b1"). InnerVolumeSpecName "kube-api-access-5nnxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:17 crc kubenswrapper[4982]: I0123 09:33:17.996502 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a09655-a300-46ad-8564-09230e4ea5b9-kube-api-access-nzjsd" (OuterVolumeSpecName: "kube-api-access-nzjsd") pod "60a09655-a300-46ad-8564-09230e4ea5b9" (UID: "60a09655-a300-46ad-8564-09230e4ea5b9"). InnerVolumeSpecName "kube-api-access-nzjsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.079266 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzjsd\" (UniqueName: \"kubernetes.io/projected/60a09655-a300-46ad-8564-09230e4ea5b9-kube-api-access-nzjsd\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.079326 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60a09655-a300-46ad-8564-09230e4ea5b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.079338 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nnxl\" (UniqueName: \"kubernetes.io/projected/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-kube-api-access-5nnxl\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.079349 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc2c64bd-230e-4c7c-b065-03ae7161d5b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.395295 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l6l7x" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.395282 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l6l7x" event={"ID":"fc2c64bd-230e-4c7c-b065-03ae7161d5b1","Type":"ContainerDied","Data":"f075d65d610c2b426f9d950879dd172fd8c875153a2ce2050852ab8060e72ca4"} Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.395663 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f075d65d610c2b426f9d950879dd172fd8c875153a2ce2050852ab8060e72ca4" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.402989 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-471c-account-create-update-jqct2" event={"ID":"60a09655-a300-46ad-8564-09230e4ea5b9","Type":"ContainerDied","Data":"c73bbce0c3fc8573296c238dc4e14a6b0c673f3111798e32c66da37895beb694"} Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.403024 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c73bbce0c3fc8573296c238dc4e14a6b0c673f3111798e32c66da37895beb694" Jan 23 09:33:18 crc kubenswrapper[4982]: I0123 09:33:18.403089 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-471c-account-create-update-jqct2" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.681757 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rtc2n"] Jan 23 09:33:19 crc kubenswrapper[4982]: E0123 09:33:19.682520 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2c64bd-230e-4c7c-b065-03ae7161d5b1" containerName="mariadb-database-create" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.682538 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2c64bd-230e-4c7c-b065-03ae7161d5b1" containerName="mariadb-database-create" Jan 23 09:33:19 crc kubenswrapper[4982]: E0123 09:33:19.682588 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a09655-a300-46ad-8564-09230e4ea5b9" containerName="mariadb-account-create-update" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.682598 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a09655-a300-46ad-8564-09230e4ea5b9" containerName="mariadb-account-create-update" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.682864 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a09655-a300-46ad-8564-09230e4ea5b9" containerName="mariadb-account-create-update" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.682899 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2c64bd-230e-4c7c-b065-03ae7161d5b1" containerName="mariadb-database-create" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.683671 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.685588 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.686188 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.686230 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6bvmc" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.705389 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rtc2n"] Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.766513 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8df46d887-g2k4g"] Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.774390 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.783451 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8df46d887-g2k4g"] Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.810211 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-combined-ca-bundle\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.810302 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-scripts\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.810344 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1be5403b-2787-4030-905d-26349736b5b4-logs\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.810465 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp87v\" (UniqueName: \"kubernetes.io/projected/1be5403b-2787-4030-905d-26349736b5b4-kube-api-access-rp87v\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.810496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-config-data\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.912601 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1be5403b-2787-4030-905d-26349736b5b4-logs\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.912664 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-dns-svc\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.912694 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hxd\" (UniqueName: \"kubernetes.io/projected/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-kube-api-access-b2hxd\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.912987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp87v\" (UniqueName: 
\"kubernetes.io/projected/1be5403b-2787-4030-905d-26349736b5b4-kube-api-access-rp87v\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913111 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-config-data\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913159 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1be5403b-2787-4030-905d-26349736b5b4-logs\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913208 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-config\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913241 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-combined-ca-bundle\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913293 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-sb\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913328 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-nb\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.913451 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-scripts\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.918517 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-config-data\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.919645 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-scripts\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " 
pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.922170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-combined-ca-bundle\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:19 crc kubenswrapper[4982]: I0123 09:33:19.933121 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp87v\" (UniqueName: \"kubernetes.io/projected/1be5403b-2787-4030-905d-26349736b5b4-kube-api-access-rp87v\") pod \"placement-db-sync-rtc2n\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.016782 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-dns-svc\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.016883 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2hxd\" (UniqueName: \"kubernetes.io/projected/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-kube-api-access-b2hxd\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.017075 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-config\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.017104 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-sb\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.017125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-nb\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.018159 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-config\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.018164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-sb\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.018981 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-nb\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.019178 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-dns-svc\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.034361 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2hxd\" (UniqueName: \"kubernetes.io/projected/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-kube-api-access-b2hxd\") pod \"dnsmasq-dns-8df46d887-g2k4g\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.044799 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.124375 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.532715 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rtc2n"] Jan 23 09:33:20 crc kubenswrapper[4982]: W0123 09:33:20.533850 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1be5403b_2787_4030_905d_26349736b5b4.slice/crio-94abf34d885a43ee4245525804addecd700b323fcd71ddaf986158b5a1f9f144 WatchSource:0}: Error finding container 94abf34d885a43ee4245525804addecd700b323fcd71ddaf986158b5a1f9f144: Status 404 returned error can't find the container with id 94abf34d885a43ee4245525804addecd700b323fcd71ddaf986158b5a1f9f144 Jan 23 09:33:20 crc kubenswrapper[4982]: I0123 09:33:20.681884 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8df46d887-g2k4g"] Jan 23 09:33:20 crc kubenswrapper[4982]: W0123 09:33:20.685562 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc4dfa1c_6e80_4c38_acc2_72eefdf4c2d8.slice/crio-371c31808d9744569b1d9922126fe68aeaddf82e5f3630b971dc0a197cdb910a WatchSource:0}: Error finding container 371c31808d9744569b1d9922126fe68aeaddf82e5f3630b971dc0a197cdb910a: Status 404 returned error can't find the container with id 371c31808d9744569b1d9922126fe68aeaddf82e5f3630b971dc0a197cdb910a Jan 23 09:33:21 crc kubenswrapper[4982]: I0123 09:33:21.434531 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerID="35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3" exitCode=0 Jan 23 09:33:21 crc kubenswrapper[4982]: I0123 09:33:21.434897 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" event={"ID":"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8","Type":"ContainerDied","Data":"35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3"} Jan 23 09:33:21 crc kubenswrapper[4982]: I0123 09:33:21.434930 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-8df46d887-g2k4g" event={"ID":"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8","Type":"ContainerStarted","Data":"371c31808d9744569b1d9922126fe68aeaddf82e5f3630b971dc0a197cdb910a"} Jan 23 09:33:21 crc kubenswrapper[4982]: I0123 09:33:21.450976 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rtc2n" event={"ID":"1be5403b-2787-4030-905d-26349736b5b4","Type":"ContainerStarted","Data":"f7ce3d1bdf0bf6230ba8dcaa5f0be7062c9b5aa0c9ab499e68892f23f44bcff7"} Jan 23 09:33:21 crc kubenswrapper[4982]: I0123 09:33:21.451023 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rtc2n" event={"ID":"1be5403b-2787-4030-905d-26349736b5b4","Type":"ContainerStarted","Data":"94abf34d885a43ee4245525804addecd700b323fcd71ddaf986158b5a1f9f144"} Jan 23 09:33:21 crc kubenswrapper[4982]: I0123 09:33:21.493559 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rtc2n" podStartSLOduration=2.493529904 podStartE2EDuration="2.493529904s" podCreationTimestamp="2026-01-23 09:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:21.487927472 +0000 UTC m=+5875.968290858" watchObservedRunningTime="2026-01-23 09:33:21.493529904 +0000 UTC m=+5875.973893300" Jan 23 09:33:22 crc kubenswrapper[4982]: I0123 09:33:22.467702 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" event={"ID":"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8","Type":"ContainerStarted","Data":"c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e"} Jan 23 09:33:22 crc kubenswrapper[4982]: I0123 09:33:22.468277 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:22 crc kubenswrapper[4982]: I0123 09:33:22.497737 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" podStartSLOduration=3.497712649 podStartE2EDuration="3.497712649s" podCreationTimestamp="2026-01-23 09:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:22.492910578 +0000 UTC m=+5876.973273974" watchObservedRunningTime="2026-01-23 09:33:22.497712649 +0000 UTC m=+5876.978076045" Jan 23 09:33:22 crc kubenswrapper[4982]: I0123 09:33:22.827766 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:33:22 crc kubenswrapper[4982]: E0123 09:33:22.828331 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:33:23 crc kubenswrapper[4982]: I0123 09:33:23.478029 4982 generic.go:334] "Generic (PLEG): container finished" podID="1be5403b-2787-4030-905d-26349736b5b4" containerID="f7ce3d1bdf0bf6230ba8dcaa5f0be7062c9b5aa0c9ab499e68892f23f44bcff7" exitCode=0 Jan 23 09:33:23 crc kubenswrapper[4982]: I0123 09:33:23.478113 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rtc2n" 
event={"ID":"1be5403b-2787-4030-905d-26349736b5b4","Type":"ContainerDied","Data":"f7ce3d1bdf0bf6230ba8dcaa5f0be7062c9b5aa0c9ab499e68892f23f44bcff7"} Jan 23 09:33:24 crc kubenswrapper[4982]: I0123 09:33:24.854787 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.030066 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-config-data\") pod \"1be5403b-2787-4030-905d-26349736b5b4\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.030181 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp87v\" (UniqueName: \"kubernetes.io/projected/1be5403b-2787-4030-905d-26349736b5b4-kube-api-access-rp87v\") pod \"1be5403b-2787-4030-905d-26349736b5b4\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.030250 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-scripts\") pod \"1be5403b-2787-4030-905d-26349736b5b4\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.030302 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1be5403b-2787-4030-905d-26349736b5b4-logs\") pod \"1be5403b-2787-4030-905d-26349736b5b4\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.030468 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-combined-ca-bundle\") pod \"1be5403b-2787-4030-905d-26349736b5b4\" (UID: \"1be5403b-2787-4030-905d-26349736b5b4\") " Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.031059 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be5403b-2787-4030-905d-26349736b5b4-logs" (OuterVolumeSpecName: "logs") pod "1be5403b-2787-4030-905d-26349736b5b4" (UID: "1be5403b-2787-4030-905d-26349736b5b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.035804 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-scripts" (OuterVolumeSpecName: "scripts") pod "1be5403b-2787-4030-905d-26349736b5b4" (UID: "1be5403b-2787-4030-905d-26349736b5b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.036043 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be5403b-2787-4030-905d-26349736b5b4-kube-api-access-rp87v" (OuterVolumeSpecName: "kube-api-access-rp87v") pod "1be5403b-2787-4030-905d-26349736b5b4" (UID: "1be5403b-2787-4030-905d-26349736b5b4"). InnerVolumeSpecName "kube-api-access-rp87v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.058749 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-config-data" (OuterVolumeSpecName: "config-data") pod "1be5403b-2787-4030-905d-26349736b5b4" (UID: "1be5403b-2787-4030-905d-26349736b5b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.059961 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1be5403b-2787-4030-905d-26349736b5b4" (UID: "1be5403b-2787-4030-905d-26349736b5b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.133317 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.133357 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.133368 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp87v\" (UniqueName: \"kubernetes.io/projected/1be5403b-2787-4030-905d-26349736b5b4-kube-api-access-rp87v\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.133379 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1be5403b-2787-4030-905d-26349736b5b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.133388 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1be5403b-2787-4030-905d-26349736b5b4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.498534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rtc2n" event={"ID":"1be5403b-2787-4030-905d-26349736b5b4","Type":"ContainerDied","Data":"94abf34d885a43ee4245525804addecd700b323fcd71ddaf986158b5a1f9f144"} Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.498587 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94abf34d885a43ee4245525804addecd700b323fcd71ddaf986158b5a1f9f144" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.498856 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rtc2n" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.584561 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-567fd77856-sgz5r"] Jan 23 09:33:25 crc kubenswrapper[4982]: E0123 09:33:25.585020 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be5403b-2787-4030-905d-26349736b5b4" containerName="placement-db-sync" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.585042 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be5403b-2787-4030-905d-26349736b5b4" containerName="placement-db-sync" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.585280 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be5403b-2787-4030-905d-26349736b5b4" containerName="placement-db-sync" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.586426 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.591854 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.592322 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.592704 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.593058 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.596474 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-6bvmc" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.608872 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-567fd77856-sgz5r"] Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.756924 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-public-tls-certs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.757301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-internal-tls-certs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.757397 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9qc\" (UniqueName: \"kubernetes.io/projected/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-kube-api-access-mq9qc\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.757424 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-config-data\") pod 
\"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.757461 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-scripts\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.757483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-combined-ca-bundle\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.757602 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-logs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859453 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-internal-tls-certs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859532 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq9qc\" (UniqueName: \"kubernetes.io/projected/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-kube-api-access-mq9qc\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859568 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-config-data\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-scripts\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859641 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-combined-ca-bundle\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859708 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-logs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " 
pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.859813 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-public-tls-certs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.860216 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-logs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.863286 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-internal-tls-certs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.863372 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-config-data\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.866158 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-combined-ca-bundle\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.869552 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-public-tls-certs\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.880944 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-scripts\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.881527 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq9qc\" (UniqueName: \"kubernetes.io/projected/9fcf73c8-de5e-40df-9d9a-f2f237e71a8c-kube-api-access-mq9qc\") pod \"placement-567fd77856-sgz5r\" (UID: \"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c\") " pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:25 crc kubenswrapper[4982]: I0123 09:33:25.909164 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:26 crc kubenswrapper[4982]: I0123 09:33:26.483376 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-567fd77856-sgz5r"] Jan 23 09:33:26 crc kubenswrapper[4982]: I0123 09:33:26.508736 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-567fd77856-sgz5r" event={"ID":"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c","Type":"ContainerStarted","Data":"5b47708906b3226e66535a52a5878ace480b0513836f9c0863f37583f92236dc"} Jan 23 09:33:27 crc kubenswrapper[4982]: I0123 09:33:27.517882 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-567fd77856-sgz5r" event={"ID":"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c","Type":"ContainerStarted","Data":"c9fe7bca82677beae3e55d6f9f344f8437a790be1ce8b614f55a611de317db73"} Jan 23 09:33:27 crc kubenswrapper[4982]: I0123 09:33:27.518250 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:27 crc kubenswrapper[4982]: I0123 09:33:27.518262 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-567fd77856-sgz5r" event={"ID":"9fcf73c8-de5e-40df-9d9a-f2f237e71a8c","Type":"ContainerStarted","Data":"473530700a24caea4fb28bd745b85fd6df7744ee4c96122f0b4f3bd435d25c09"} Jan 23 09:33:27 crc kubenswrapper[4982]: I0123 09:33:27.546809 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-567fd77856-sgz5r" podStartSLOduration=2.546782724 podStartE2EDuration="2.546782724s" podCreationTimestamp="2026-01-23 09:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:33:27.534357817 +0000 UTC m=+5882.014721203" watchObservedRunningTime="2026-01-23 09:33:27.546782724 +0000 UTC m=+5882.027146110" Jan 23 09:33:28 crc kubenswrapper[4982]: I0123 09:33:28.528663 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.126782 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.192909 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8944c655-7dw6p"] Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.195272 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerName="dnsmasq-dns" containerID="cri-o://fdaccac52864b213eaac8e1ab4b07a36f5d543ee6ae50c304f92963a45b20e32" gracePeriod=10 Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.568958 4982 generic.go:334] "Generic (PLEG): container finished" podID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerID="fdaccac52864b213eaac8e1ab4b07a36f5d543ee6ae50c304f92963a45b20e32" exitCode=0 Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.569076 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" event={"ID":"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670","Type":"ContainerDied","Data":"fdaccac52864b213eaac8e1ab4b07a36f5d543ee6ae50c304f92963a45b20e32"} Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.679532 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.717550 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-nb\") pod \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.717601 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8bwm\" (UniqueName: \"kubernetes.io/projected/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-kube-api-access-t8bwm\") pod \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.717695 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-dns-svc\") pod \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.717744 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-config\") pod \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.717779 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-sb\") pod \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\" (UID: \"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670\") " Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.725983 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-kube-api-access-t8bwm" (OuterVolumeSpecName: "kube-api-access-t8bwm") pod "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" (UID: "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670"). InnerVolumeSpecName "kube-api-access-t8bwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.790774 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" (UID: "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.802229 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" (UID: "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.813703 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-config" (OuterVolumeSpecName: "config") pod "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" (UID: "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.820580 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.820654 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.820663 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.820674 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8bwm\" (UniqueName: \"kubernetes.io/projected/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-kube-api-access-t8bwm\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.838166 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" (UID: "a7a8f26d-16d5-4a92-90a6-0c1d3e7db670"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:33:30 crc kubenswrapper[4982]: I0123 09:33:30.925483 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.579467 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" event={"ID":"a7a8f26d-16d5-4a92-90a6-0c1d3e7db670","Type":"ContainerDied","Data":"a2b75843f260cd75e62971edb242d3eca81a4a2e906a4be5cd9c6387f6db274f"} Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.579523 4982 scope.go:117] "RemoveContainer" containerID="fdaccac52864b213eaac8e1ab4b07a36f5d543ee6ae50c304f92963a45b20e32" Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.579540 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8944c655-7dw6p" Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.598126 4982 scope.go:117] "RemoveContainer" containerID="587b6a9ef5393a6edbdba6f7fdcedc6c91dc0db18a6d8641be1ad9395bfc1426" Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.619301 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8944c655-7dw6p"] Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.631219 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d8944c655-7dw6p"] Jan 23 09:33:31 crc kubenswrapper[4982]: I0123 09:33:31.839769 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" path="/var/lib/kubelet/pods/a7a8f26d-16d5-4a92-90a6-0c1d3e7db670/volumes" Jan 23 09:33:37 crc kubenswrapper[4982]: I0123 09:33:37.828290 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:33:37 crc kubenswrapper[4982]: E0123 09:33:37.847757 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:33:48 crc kubenswrapper[4982]: I0123 09:33:48.828101 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:33:48 crc kubenswrapper[4982]: E0123 09:33:48.828727 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:33:56 crc kubenswrapper[4982]: I0123 09:33:56.918835 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:56 crc kubenswrapper[4982]: I0123 09:33:56.928655 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-567fd77856-sgz5r" Jan 23 09:33:59 crc kubenswrapper[4982]: I0123 09:33:59.827819 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:33:59 crc kubenswrapper[4982]: E0123 09:33:59.828531 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:34:14 crc kubenswrapper[4982]: I0123 09:34:14.827067 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:34:14 crc kubenswrapper[4982]: E0123 09:34:14.827882 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.546639 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-8zsl9"] Jan 23 09:34:20 crc kubenswrapper[4982]: E0123 09:34:20.549152 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerName="dnsmasq-dns" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.549264 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerName="dnsmasq-dns" Jan 23 09:34:20 crc kubenswrapper[4982]: E0123 09:34:20.549347 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerName="init" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.549405 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerName="init" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.549722 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7a8f26d-16d5-4a92-90a6-0c1d3e7db670" containerName="dnsmasq-dns" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.550552 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.558048 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8zsl9"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.641152 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wgjvd"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.643144 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.651999 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d92b-account-create-update-lx2gt"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.653337 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6be0584-affd-47f3-8606-83e94c22c855-operator-scripts\") pod \"nova-api-db-create-8zsl9\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.653460 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zvs\" (UniqueName: \"kubernetes.io/projected/d6be0584-affd-47f3-8606-83e94c22c855-kube-api-access-c6zvs\") pod \"nova-api-db-create-8zsl9\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.653723 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.657812 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.666135 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d92b-account-create-update-lx2gt"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.678258 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wgjvd"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.748303 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-b6886"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.749760 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.755858 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40479474-e29f-4fd9-9880-47c78e8d3081-operator-scripts\") pod \"nova-cell0-db-create-wgjvd\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.755956 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vhj\" (UniqueName: \"kubernetes.io/projected/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-kube-api-access-r9vhj\") pod \"nova-api-d92b-account-create-update-lx2gt\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.756024 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zvs\" (UniqueName: \"kubernetes.io/projected/d6be0584-affd-47f3-8606-83e94c22c855-kube-api-access-c6zvs\") pod \"nova-api-db-create-8zsl9\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.756130 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-operator-scripts\") pod \"nova-api-d92b-account-create-update-lx2gt\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.756163 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrh2b\" (UniqueName: \"kubernetes.io/projected/40479474-e29f-4fd9-9880-47c78e8d3081-kube-api-access-jrh2b\") pod \"nova-cell0-db-create-wgjvd\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.756278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6be0584-affd-47f3-8606-83e94c22c855-operator-scripts\") pod \"nova-api-db-create-8zsl9\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.756956 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6be0584-affd-47f3-8606-83e94c22c855-operator-scripts\") pod \"nova-api-db-create-8zsl9\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.761350 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b6886"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.781444 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6zvs\" (UniqueName: \"kubernetes.io/projected/d6be0584-affd-47f3-8606-83e94c22c855-kube-api-access-c6zvs\") pod \"nova-api-db-create-8zsl9\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.846336 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ba3d-account-create-update-w79nx"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.847530 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.853230 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.859351 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40479474-e29f-4fd9-9880-47c78e8d3081-operator-scripts\") pod \"nova-cell0-db-create-wgjvd\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.859454 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9vhj\" (UniqueName: \"kubernetes.io/projected/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-kube-api-access-r9vhj\") pod \"nova-api-d92b-account-create-update-lx2gt\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.859568 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ffef2a-10fa-4d0c-9b74-41acda0a4121-operator-scripts\") pod \"nova-cell1-db-create-b6886\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.859639 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-operator-scripts\") pod \"nova-api-d92b-account-create-update-lx2gt\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.859681 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrh2b\" (UniqueName: \"kubernetes.io/projected/40479474-e29f-4fd9-9880-47c78e8d3081-kube-api-access-jrh2b\") pod \"nova-cell0-db-create-wgjvd\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.859774 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4z2mt\" (UniqueName: \"kubernetes.io/projected/47ffef2a-10fa-4d0c-9b74-41acda0a4121-kube-api-access-4z2mt\") pod \"nova-cell1-db-create-b6886\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.860447 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40479474-e29f-4fd9-9880-47c78e8d3081-operator-scripts\") pod \"nova-cell0-db-create-wgjvd\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.860463 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-operator-scripts\") pod \"nova-api-d92b-account-create-update-lx2gt\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.863246 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ba3d-account-create-update-w79nx"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.868568 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.879246 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9vhj\" (UniqueName: \"kubernetes.io/projected/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-kube-api-access-r9vhj\") pod \"nova-api-d92b-account-create-update-lx2gt\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.892209 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrh2b\" (UniqueName: \"kubernetes.io/projected/40479474-e29f-4fd9-9880-47c78e8d3081-kube-api-access-jrh2b\") pod \"nova-cell0-db-create-wgjvd\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.962252 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ffef2a-10fa-4d0c-9b74-41acda0a4121-operator-scripts\") pod \"nova-cell1-db-create-b6886\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.962313 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dj2\" (UniqueName: \"kubernetes.io/projected/ccac9163-42c2-4c27-b714-f4aab14e3a12-kube-api-access-p6dj2\") pod \"nova-cell0-ba3d-account-create-update-w79nx\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.962394 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccac9163-42c2-4c27-b714-f4aab14e3a12-operator-scripts\") pod \"nova-cell0-ba3d-account-create-update-w79nx\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 
09:34:20.962427 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2mt\" (UniqueName: \"kubernetes.io/projected/47ffef2a-10fa-4d0c-9b74-41acda0a4121-kube-api-access-4z2mt\") pod \"nova-cell1-db-create-b6886\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.963381 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ffef2a-10fa-4d0c-9b74-41acda0a4121-operator-scripts\") pod \"nova-cell1-db-create-b6886\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.964396 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d846-account-create-update-rhmzn"] Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.965695 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.967618 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.968477 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.972506 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.987608 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2mt\" (UniqueName: \"kubernetes.io/projected/47ffef2a-10fa-4d0c-9b74-41acda0a4121-kube-api-access-4z2mt\") pod \"nova-cell1-db-create-b6886\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:20 crc kubenswrapper[4982]: I0123 09:34:20.990661 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d846-account-create-update-rhmzn"] Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.064531 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8075af5-7e6e-46b3-b08d-065cead6efb8-operator-scripts\") pod \"nova-cell1-d846-account-create-update-rhmzn\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " pod="openstack/nova-cell1-d846-account-create-update-rhmzn" Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.064755 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-929bg\" (UniqueName: \"kubernetes.io/projected/c8075af5-7e6e-46b3-b08d-065cead6efb8-kube-api-access-929bg\") pod \"nova-cell1-d846-account-create-update-rhmzn\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " pod="openstack/nova-cell1-d846-account-create-update-rhmzn" Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.064881 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dj2\" (UniqueName: \"kubernetes.io/projected/ccac9163-42c2-4c27-b714-f4aab14e3a12-kube-api-access-p6dj2\") pod \"nova-cell0-ba3d-account-create-update-w79nx\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" Jan 
23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.065047 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccac9163-42c2-4c27-b714-f4aab14e3a12-operator-scripts\") pod \"nova-cell0-ba3d-account-create-update-w79nx\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " pod="openstack/nova-cell0-ba3d-account-create-update-w79nx"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.066370 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccac9163-42c2-4c27-b714-f4aab14e3a12-operator-scripts\") pod \"nova-cell0-ba3d-account-create-update-w79nx\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " pod="openstack/nova-cell0-ba3d-account-create-update-w79nx"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.066602 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b6886"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.093958 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dj2\" (UniqueName: \"kubernetes.io/projected/ccac9163-42c2-4c27-b714-f4aab14e3a12-kube-api-access-p6dj2\") pod \"nova-cell0-ba3d-account-create-update-w79nx\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " pod="openstack/nova-cell0-ba3d-account-create-update-w79nx"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.095759 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.171258 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8075af5-7e6e-46b3-b08d-065cead6efb8-operator-scripts\") pod \"nova-cell1-d846-account-create-update-rhmzn\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " pod="openstack/nova-cell1-d846-account-create-update-rhmzn"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.171326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-929bg\" (UniqueName: \"kubernetes.io/projected/c8075af5-7e6e-46b3-b08d-065cead6efb8-kube-api-access-929bg\") pod \"nova-cell1-d846-account-create-update-rhmzn\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " pod="openstack/nova-cell1-d846-account-create-update-rhmzn"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.172173 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8075af5-7e6e-46b3-b08d-065cead6efb8-operator-scripts\") pod \"nova-cell1-d846-account-create-update-rhmzn\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " pod="openstack/nova-cell1-d846-account-create-update-rhmzn"
Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.189781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-929bg\" (UniqueName: \"kubernetes.io/projected/c8075af5-7e6e-46b3-b08d-065cead6efb8-kube-api-access-929bg\") pod \"nova-cell1-d846-account-create-update-rhmzn\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " pod="openstack/nova-cell1-d846-account-create-update-rhmzn"
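Each of these one-shot nova jobs follows the same volume plumbing: VerifyControllerAttachedVolume confirms the volume is safe to mount, then MountVolume.SetUp materializes the ConfigMap scripts and the projected service-account token. A hedged client-go sketch that lists the volume sources one of these pods declares (pod and namespace names taken from the log; kubeconfig path assumed):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openstack").Get(context.TODO(),
		"nova-api-db-create-8zsl9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints one line per declared volume, mirroring the UniqueName
	// plugin prefixes (kubernetes.io/configmap, kubernetes.io/projected).
	for _, v := range pod.Spec.Volumes {
		switch {
		case v.ConfigMap != nil:
			fmt.Printf("%s -> configmap %s\n", v.Name, v.ConfigMap.Name)
		case v.Projected != nil:
			fmt.Printf("%s -> projected (%d sources)\n", v.Name, len(v.Projected.Sources))
		case v.Secret != nil:
			fmt.Printf("%s -> secret %s\n", v.Name, v.Secret.SecretName)
		default:
			fmt.Printf("%s -> other\n", v.Name)
		}
	}
}
```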
Need to start a new one" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.425954 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-8zsl9"] Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.556280 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wgjvd"] Jan 23 09:34:21 crc kubenswrapper[4982]: W0123 09:34:21.562322 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40479474_e29f_4fd9_9880_47c78e8d3081.slice/crio-f6a8b9d44f91b3618e46f8d700233a313aa4990049284d5bd544bc35373ad3ae WatchSource:0}: Error finding container f6a8b9d44f91b3618e46f8d700233a313aa4990049284d5bd544bc35373ad3ae: Status 404 returned error can't find the container with id f6a8b9d44f91b3618e46f8d700233a313aa4990049284d5bd544bc35373ad3ae Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.647741 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d92b-account-create-update-lx2gt"] Jan 23 09:34:21 crc kubenswrapper[4982]: W0123 09:34:21.651120 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ce6f0af_af9b_44a0_8e98_188afaa9fa75.slice/crio-c0466aa91fb1e209e4e1a8181364e20940635365d4fecdb78dc3f90614bcea94 WatchSource:0}: Error finding container c0466aa91fb1e209e4e1a8181364e20940635365d4fecdb78dc3f90614bcea94: Status 404 returned error can't find the container with id c0466aa91fb1e209e4e1a8181364e20940635365d4fecdb78dc3f90614bcea94 Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.712741 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ba3d-account-create-update-w79nx"] Jan 23 09:34:21 crc kubenswrapper[4982]: W0123 09:34:21.717767 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccac9163_42c2_4c27_b714_f4aab14e3a12.slice/crio-1a529d954b724f9891d355dcd5731d3efc686a29b4e57ec8f903521d0e1dc9f8 WatchSource:0}: Error finding container 1a529d954b724f9891d355dcd5731d3efc686a29b4e57ec8f903521d0e1dc9f8: Status 404 returned error can't find the container with id 1a529d954b724f9891d355dcd5731d3efc686a29b4e57ec8f903521d0e1dc9f8 Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.729156 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b6886"] Jan 23 09:34:21 crc kubenswrapper[4982]: I0123 09:34:21.938167 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d846-account-create-update-rhmzn"] Jan 23 09:34:21 crc kubenswrapper[4982]: W0123 09:34:21.962990 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8075af5_7e6e_46b3_b08d_065cead6efb8.slice/crio-979f1d9239b51d2f9347e45652a8dd12214b18e0a037bac37a06aa5426b9d807 WatchSource:0}: Error finding container 979f1d9239b51d2f9347e45652a8dd12214b18e0a037bac37a06aa5426b9d807: Status 404 returned error can't find the container with id 979f1d9239b51d2f9347e45652a8dd12214b18e0a037bac37a06aa5426b9d807 Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.100304 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" 
event={"ID":"ccac9163-42c2-4c27-b714-f4aab14e3a12","Type":"ContainerStarted","Data":"1a529d954b724f9891d355dcd5731d3efc686a29b4e57ec8f903521d0e1dc9f8"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.103163 4982 generic.go:334] "Generic (PLEG): container finished" podID="40479474-e29f-4fd9-9880-47c78e8d3081" containerID="3d70546f66f70b7063a47bbdb9c93860e2696f9f3f6ec45e1545982617cbaa53" exitCode=0 Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.103246 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjvd" event={"ID":"40479474-e29f-4fd9-9880-47c78e8d3081","Type":"ContainerDied","Data":"3d70546f66f70b7063a47bbdb9c93860e2696f9f3f6ec45e1545982617cbaa53"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.103264 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjvd" event={"ID":"40479474-e29f-4fd9-9880-47c78e8d3081","Type":"ContainerStarted","Data":"f6a8b9d44f91b3618e46f8d700233a313aa4990049284d5bd544bc35373ad3ae"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.107118 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" event={"ID":"c8075af5-7e6e-46b3-b08d-065cead6efb8","Type":"ContainerStarted","Data":"979f1d9239b51d2f9347e45652a8dd12214b18e0a037bac37a06aa5426b9d807"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.119806 4982 generic.go:334] "Generic (PLEG): container finished" podID="d6be0584-affd-47f3-8606-83e94c22c855" containerID="9788b1a222a7ceed2550fdc3c4f23cc671d48b0acc4e9b4dde463ced562d9aed" exitCode=0 Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.119918 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8zsl9" event={"ID":"d6be0584-affd-47f3-8606-83e94c22c855","Type":"ContainerDied","Data":"9788b1a222a7ceed2550fdc3c4f23cc671d48b0acc4e9b4dde463ced562d9aed"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.119964 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8zsl9" event={"ID":"d6be0584-affd-47f3-8606-83e94c22c855","Type":"ContainerStarted","Data":"47c54878db292b1d212360b454b37531a9d9cd97e4e9f473ba516318ac2f5187"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.128695 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d92b-account-create-update-lx2gt" event={"ID":"0ce6f0af-af9b-44a0-8e98-188afaa9fa75","Type":"ContainerStarted","Data":"c0466aa91fb1e209e4e1a8181364e20940635365d4fecdb78dc3f90614bcea94"} Jan 23 09:34:22 crc kubenswrapper[4982]: I0123 09:34:22.140815 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b6886" event={"ID":"47ffef2a-10fa-4d0c-9b74-41acda0a4121","Type":"ContainerStarted","Data":"619102124af27a605cf0add6980e0b687a82439c5dd1fc418cd2def371dbcf86"} Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.152354 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d92b-account-create-update-lx2gt" event={"ID":"0ce6f0af-af9b-44a0-8e98-188afaa9fa75","Type":"ContainerStarted","Data":"70191a2199a49a9b59309d2e343d3776daaaaf7fe9436ae3a615f27818036ffb"} Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.155163 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b6886" event={"ID":"47ffef2a-10fa-4d0c-9b74-41acda0a4121","Type":"ContainerStarted","Data":"c8c20b1cbc6e8c0f850a40fd3b81f0ca162d7f8f31b5f93f36ad9fb1228130f1"} Jan 23 09:34:23 crc 
kubenswrapper[4982]: I0123 09:34:23.158152 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" event={"ID":"ccac9163-42c2-4c27-b714-f4aab14e3a12","Type":"ContainerStarted","Data":"44e18a8c87e38e549763c2196f5235726f0735f9ae00c0994fb85e31cfed3354"}
Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.159829 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" event={"ID":"c8075af5-7e6e-46b3-b08d-065cead6efb8","Type":"ContainerStarted","Data":"087df3345fdcfb198ce149d139b843a6767571de54a2a55f68c098979797d455"}
Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.185686 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-d92b-account-create-update-lx2gt" podStartSLOduration=3.185661521 podStartE2EDuration="3.185661521s" podCreationTimestamp="2026-01-23 09:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:23.169719758 +0000 UTC m=+5937.650083164" watchObservedRunningTime="2026-01-23 09:34:23.185661521 +0000 UTC m=+5937.666024907"
Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.192837 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" podStartSLOduration=3.192816606 podStartE2EDuration="3.192816606s" podCreationTimestamp="2026-01-23 09:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:23.183568665 +0000 UTC m=+5937.663932051" watchObservedRunningTime="2026-01-23 09:34:23.192816606 +0000 UTC m=+5937.673179982"
Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.205267 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-b6886" podStartSLOduration=3.205243843 podStartE2EDuration="3.205243843s" podCreationTimestamp="2026-01-23 09:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:23.198419138 +0000 UTC m=+5937.678782524" watchObservedRunningTime="2026-01-23 09:34:23.205243843 +0000 UTC m=+5937.685607229"
Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.224664 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" podStartSLOduration=3.22464314 podStartE2EDuration="3.22464314s" podCreationTimestamp="2026-01-23 09:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:23.213075216 +0000 UTC m=+5937.693438602" watchObservedRunningTime="2026-01-23 09:34:23.22464314 +0000 UTC m=+5937.705006526"
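The startup-latency tracker arithmetic here is plain: with no image pull (both pull timestamps are the zero value), podStartSLOduration is just observedRunningTime minus podCreationTimestamp. A small Go check that reproduces the 3.185661521s reported for nova-api-d92b-account-create-update-lx2gt from the timestamps in its entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the lx2gt tracker entry above. With both
	// pull timestamps at the zero value, SLO and E2E durations coincide.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-01-23 09:34:20 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-23 09:34:23.185661521 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 3.185661521s
}
```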
Need to start a new one" pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.829203 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6be0584-affd-47f3-8606-83e94c22c855-operator-scripts\") pod \"d6be0584-affd-47f3-8606-83e94c22c855\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.829396 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zvs\" (UniqueName: \"kubernetes.io/projected/d6be0584-affd-47f3-8606-83e94c22c855-kube-api-access-c6zvs\") pod \"d6be0584-affd-47f3-8606-83e94c22c855\" (UID: \"d6be0584-affd-47f3-8606-83e94c22c855\") " Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.829578 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrh2b\" (UniqueName: \"kubernetes.io/projected/40479474-e29f-4fd9-9880-47c78e8d3081-kube-api-access-jrh2b\") pod \"40479474-e29f-4fd9-9880-47c78e8d3081\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.829672 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40479474-e29f-4fd9-9880-47c78e8d3081-operator-scripts\") pod \"40479474-e29f-4fd9-9880-47c78e8d3081\" (UID: \"40479474-e29f-4fd9-9880-47c78e8d3081\") " Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.833392 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6be0584-affd-47f3-8606-83e94c22c855-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6be0584-affd-47f3-8606-83e94c22c855" (UID: "d6be0584-affd-47f3-8606-83e94c22c855"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.833956 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40479474-e29f-4fd9-9880-47c78e8d3081-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40479474-e29f-4fd9-9880-47c78e8d3081" (UID: "40479474-e29f-4fd9-9880-47c78e8d3081"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.838089 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6be0584-affd-47f3-8606-83e94c22c855-kube-api-access-c6zvs" (OuterVolumeSpecName: "kube-api-access-c6zvs") pod "d6be0584-affd-47f3-8606-83e94c22c855" (UID: "d6be0584-affd-47f3-8606-83e94c22c855"). InnerVolumeSpecName "kube-api-access-c6zvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.838675 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40479474-e29f-4fd9-9880-47c78e8d3081-kube-api-access-jrh2b" (OuterVolumeSpecName: "kube-api-access-jrh2b") pod "40479474-e29f-4fd9-9880-47c78e8d3081" (UID: "40479474-e29f-4fd9-9880-47c78e8d3081"). InnerVolumeSpecName "kube-api-access-jrh2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.841507 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrh2b\" (UniqueName: \"kubernetes.io/projected/40479474-e29f-4fd9-9880-47c78e8d3081-kube-api-access-jrh2b\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.841559 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40479474-e29f-4fd9-9880-47c78e8d3081-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.841570 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6be0584-affd-47f3-8606-83e94c22c855-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:23 crc kubenswrapper[4982]: I0123 09:34:23.841580 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zvs\" (UniqueName: \"kubernetes.io/projected/d6be0584-affd-47f3-8606-83e94c22c855-kube-api-access-c6zvs\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.180525 4982 generic.go:334] "Generic (PLEG): container finished" podID="47ffef2a-10fa-4d0c-9b74-41acda0a4121" containerID="c8c20b1cbc6e8c0f850a40fd3b81f0ca162d7f8f31b5f93f36ad9fb1228130f1" exitCode=0 Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.180648 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b6886" event={"ID":"47ffef2a-10fa-4d0c-9b74-41acda0a4121","Type":"ContainerDied","Data":"c8c20b1cbc6e8c0f850a40fd3b81f0ca162d7f8f31b5f93f36ad9fb1228130f1"} Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.184529 4982 generic.go:334] "Generic (PLEG): container finished" podID="ccac9163-42c2-4c27-b714-f4aab14e3a12" containerID="44e18a8c87e38e549763c2196f5235726f0735f9ae00c0994fb85e31cfed3354" exitCode=0 Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.184672 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" event={"ID":"ccac9163-42c2-4c27-b714-f4aab14e3a12","Type":"ContainerDied","Data":"44e18a8c87e38e549763c2196f5235726f0735f9ae00c0994fb85e31cfed3354"} Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.186167 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjvd" event={"ID":"40479474-e29f-4fd9-9880-47c78e8d3081","Type":"ContainerDied","Data":"f6a8b9d44f91b3618e46f8d700233a313aa4990049284d5bd544bc35373ad3ae"} Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.186204 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a8b9d44f91b3618e46f8d700233a313aa4990049284d5bd544bc35373ad3ae" Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.186275 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-wgjvd" Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.188278 4982 generic.go:334] "Generic (PLEG): container finished" podID="c8075af5-7e6e-46b3-b08d-065cead6efb8" containerID="087df3345fdcfb198ce149d139b843a6767571de54a2a55f68c098979797d455" exitCode=0 Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.188362 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" event={"ID":"c8075af5-7e6e-46b3-b08d-065cead6efb8","Type":"ContainerDied","Data":"087df3345fdcfb198ce149d139b843a6767571de54a2a55f68c098979797d455"} Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.190652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-8zsl9" event={"ID":"d6be0584-affd-47f3-8606-83e94c22c855","Type":"ContainerDied","Data":"47c54878db292b1d212360b454b37531a9d9cd97e4e9f473ba516318ac2f5187"} Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.190700 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47c54878db292b1d212360b454b37531a9d9cd97e4e9f473ba516318ac2f5187" Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.190854 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-8zsl9" Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.192824 4982 generic.go:334] "Generic (PLEG): container finished" podID="0ce6f0af-af9b-44a0-8e98-188afaa9fa75" containerID="70191a2199a49a9b59309d2e343d3776daaaaf7fe9436ae3a615f27818036ffb" exitCode=0 Jan 23 09:34:24 crc kubenswrapper[4982]: I0123 09:34:24.192853 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d92b-account-create-update-lx2gt" event={"ID":"0ce6f0af-af9b-44a0-8e98-188afaa9fa75","Type":"ContainerDied","Data":"70191a2199a49a9b59309d2e343d3776daaaaf7fe9436ae3a615f27818036ffb"} Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.592457 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.680015 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccac9163-42c2-4c27-b714-f4aab14e3a12-operator-scripts\") pod \"ccac9163-42c2-4c27-b714-f4aab14e3a12\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.680145 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6dj2\" (UniqueName: \"kubernetes.io/projected/ccac9163-42c2-4c27-b714-f4aab14e3a12-kube-api-access-p6dj2\") pod \"ccac9163-42c2-4c27-b714-f4aab14e3a12\" (UID: \"ccac9163-42c2-4c27-b714-f4aab14e3a12\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.680971 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccac9163-42c2-4c27-b714-f4aab14e3a12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccac9163-42c2-4c27-b714-f4aab14e3a12" (UID: "ccac9163-42c2-4c27-b714-f4aab14e3a12"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.685851 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccac9163-42c2-4c27-b714-f4aab14e3a12-kube-api-access-p6dj2" (OuterVolumeSpecName: "kube-api-access-p6dj2") pod "ccac9163-42c2-4c27-b714-f4aab14e3a12" (UID: "ccac9163-42c2-4c27-b714-f4aab14e3a12"). InnerVolumeSpecName "kube-api-access-p6dj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.779078 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.782853 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccac9163-42c2-4c27-b714-f4aab14e3a12-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.782910 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6dj2\" (UniqueName: \"kubernetes.io/projected/ccac9163-42c2-4c27-b714-f4aab14e3a12-kube-api-access-p6dj2\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.795708 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.818964 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.884859 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z2mt\" (UniqueName: \"kubernetes.io/projected/47ffef2a-10fa-4d0c-9b74-41acda0a4121-kube-api-access-4z2mt\") pod \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.884991 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-929bg\" (UniqueName: \"kubernetes.io/projected/c8075af5-7e6e-46b3-b08d-065cead6efb8-kube-api-access-929bg\") pod \"c8075af5-7e6e-46b3-b08d-065cead6efb8\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.885052 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-operator-scripts\") pod \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.885086 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ffef2a-10fa-4d0c-9b74-41acda0a4121-operator-scripts\") pod \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\" (UID: \"47ffef2a-10fa-4d0c-9b74-41acda0a4121\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.885176 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8075af5-7e6e-46b3-b08d-065cead6efb8-operator-scripts\") pod \"c8075af5-7e6e-46b3-b08d-065cead6efb8\" (UID: \"c8075af5-7e6e-46b3-b08d-065cead6efb8\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.885208 4982 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9vhj\" (UniqueName: \"kubernetes.io/projected/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-kube-api-access-r9vhj\") pod \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\" (UID: \"0ce6f0af-af9b-44a0-8e98-188afaa9fa75\") " Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.885915 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ce6f0af-af9b-44a0-8e98-188afaa9fa75" (UID: "0ce6f0af-af9b-44a0-8e98-188afaa9fa75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.885927 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47ffef2a-10fa-4d0c-9b74-41acda0a4121-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47ffef2a-10fa-4d0c-9b74-41acda0a4121" (UID: "47ffef2a-10fa-4d0c-9b74-41acda0a4121"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.886039 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8075af5-7e6e-46b3-b08d-065cead6efb8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8075af5-7e6e-46b3-b08d-065cead6efb8" (UID: "c8075af5-7e6e-46b3-b08d-065cead6efb8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.889003 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ffef2a-10fa-4d0c-9b74-41acda0a4121-kube-api-access-4z2mt" (OuterVolumeSpecName: "kube-api-access-4z2mt") pod "47ffef2a-10fa-4d0c-9b74-41acda0a4121" (UID: "47ffef2a-10fa-4d0c-9b74-41acda0a4121"). InnerVolumeSpecName "kube-api-access-4z2mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.890477 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8075af5-7e6e-46b3-b08d-065cead6efb8-kube-api-access-929bg" (OuterVolumeSpecName: "kube-api-access-929bg") pod "c8075af5-7e6e-46b3-b08d-065cead6efb8" (UID: "c8075af5-7e6e-46b3-b08d-065cead6efb8"). InnerVolumeSpecName "kube-api-access-929bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.896183 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-kube-api-access-r9vhj" (OuterVolumeSpecName: "kube-api-access-r9vhj") pod "0ce6f0af-af9b-44a0-8e98-188afaa9fa75" (UID: "0ce6f0af-af9b-44a0-8e98-188afaa9fa75"). InnerVolumeSpecName "kube-api-access-r9vhj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.988727 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z2mt\" (UniqueName: \"kubernetes.io/projected/47ffef2a-10fa-4d0c-9b74-41acda0a4121-kube-api-access-4z2mt\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.988786 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-929bg\" (UniqueName: \"kubernetes.io/projected/c8075af5-7e6e-46b3-b08d-065cead6efb8-kube-api-access-929bg\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.989065 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.989407 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47ffef2a-10fa-4d0c-9b74-41acda0a4121-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.989464 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8075af5-7e6e-46b3-b08d-065cead6efb8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:25 crc kubenswrapper[4982]: I0123 09:34:25.989480 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9vhj\" (UniqueName: \"kubernetes.io/projected/0ce6f0af-af9b-44a0-8e98-188afaa9fa75-kube-api-access-r9vhj\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.220691 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b6886" event={"ID":"47ffef2a-10fa-4d0c-9b74-41acda0a4121","Type":"ContainerDied","Data":"619102124af27a605cf0add6980e0b687a82439c5dd1fc418cd2def371dbcf86"} Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.220750 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="619102124af27a605cf0add6980e0b687a82439c5dd1fc418cd2def371dbcf86" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.220774 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b6886" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.223428 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" event={"ID":"ccac9163-42c2-4c27-b714-f4aab14e3a12","Type":"ContainerDied","Data":"1a529d954b724f9891d355dcd5731d3efc686a29b4e57ec8f903521d0e1dc9f8"} Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.223506 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a529d954b724f9891d355dcd5731d3efc686a29b4e57ec8f903521d0e1dc9f8" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.223615 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ba3d-account-create-update-w79nx" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.227346 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.227339 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d846-account-create-update-rhmzn" event={"ID":"c8075af5-7e6e-46b3-b08d-065cead6efb8","Type":"ContainerDied","Data":"979f1d9239b51d2f9347e45652a8dd12214b18e0a037bac37a06aa5426b9d807"} Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.227803 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979f1d9239b51d2f9347e45652a8dd12214b18e0a037bac37a06aa5426b9d807" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.236891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d92b-account-create-update-lx2gt" event={"ID":"0ce6f0af-af9b-44a0-8e98-188afaa9fa75","Type":"ContainerDied","Data":"c0466aa91fb1e209e4e1a8181364e20940635365d4fecdb78dc3f90614bcea94"} Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.236928 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d92b-account-create-update-lx2gt" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.236952 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0466aa91fb1e209e4e1a8181364e20940635365d4fecdb78dc3f90614bcea94" Jan 23 09:34:26 crc kubenswrapper[4982]: I0123 09:34:26.828152 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:34:26 crc kubenswrapper[4982]: E0123 09:34:26.828798 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.040434 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s4rj"] Jan 23 09:34:31 crc kubenswrapper[4982]: E0123 09:34:31.042224 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6be0584-affd-47f3-8606-83e94c22c855" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042255 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6be0584-affd-47f3-8606-83e94c22c855" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: E0123 09:34:31.042277 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce6f0af-af9b-44a0-8e98-188afaa9fa75" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042286 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce6f0af-af9b-44a0-8e98-188afaa9fa75" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: E0123 09:34:31.042301 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40479474-e29f-4fd9-9880-47c78e8d3081" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042309 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="40479474-e29f-4fd9-9880-47c78e8d3081" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: E0123 09:34:31.042327 4982 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c8075af5-7e6e-46b3-b08d-065cead6efb8" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042335 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8075af5-7e6e-46b3-b08d-065cead6efb8" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: E0123 09:34:31.042349 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ffef2a-10fa-4d0c-9b74-41acda0a4121" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042357 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ffef2a-10fa-4d0c-9b74-41acda0a4121" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: E0123 09:34:31.042406 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccac9163-42c2-4c27-b714-f4aab14e3a12" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042416 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccac9163-42c2-4c27-b714-f4aab14e3a12" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042752 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="40479474-e29f-4fd9-9880-47c78e8d3081" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042780 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce6f0af-af9b-44a0-8e98-188afaa9fa75" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042794 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccac9163-42c2-4c27-b714-f4aab14e3a12" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042809 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6be0584-affd-47f3-8606-83e94c22c855" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042824 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8075af5-7e6e-46b3-b08d-065cead6efb8" containerName="mariadb-account-create-update" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.042842 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ffef2a-10fa-4d0c-9b74-41acda0a4121" containerName="mariadb-database-create" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.044116 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.046602 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.046807 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sc6vd" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.047984 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.054028 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s4rj"] Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.101110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.101488 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-scripts\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.101565 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvc5f\" (UniqueName: \"kubernetes.io/projected/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-kube-api-access-xvc5f\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.101879 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-config-data\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.206841 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-config-data\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.206957 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.206990 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-scripts\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: 
\"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.207038 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvc5f\" (UniqueName: \"kubernetes.io/projected/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-kube-api-access-xvc5f\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.221203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-scripts\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.232538 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-config-data\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.233367 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.261770 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvc5f\" (UniqueName: \"kubernetes.io/projected/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-kube-api-access-xvc5f\") pod \"nova-cell0-conductor-db-sync-4s4rj\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.393197 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:31 crc kubenswrapper[4982]: I0123 09:34:31.878003 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s4rj"] Jan 23 09:34:32 crc kubenswrapper[4982]: I0123 09:34:32.309316 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" event={"ID":"bb634be2-4ee1-47fb-aa55-8f6416c1aba8","Type":"ContainerStarted","Data":"880ce0a4c73b6fe9e982d827e2d02ff676011187834d8eb261179a12b14b3763"} Jan 23 09:34:32 crc kubenswrapper[4982]: I0123 09:34:32.309660 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" event={"ID":"bb634be2-4ee1-47fb-aa55-8f6416c1aba8","Type":"ContainerStarted","Data":"37142fcfe33eeadd990e5a82f252a6143b15e1b5d3ff0e7e718f0fd62d127731"} Jan 23 09:34:32 crc kubenswrapper[4982]: I0123 09:34:32.337634 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" podStartSLOduration=1.337577273 podStartE2EDuration="1.337577273s" podCreationTimestamp="2026-01-23 09:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:32.323398318 +0000 UTC m=+5946.803761704" watchObservedRunningTime="2026-01-23 09:34:32.337577273 +0000 UTC m=+5946.817940659" Jan 23 09:34:39 crc kubenswrapper[4982]: I0123 09:34:39.374237 4982 generic.go:334] "Generic (PLEG): container finished" podID="bb634be2-4ee1-47fb-aa55-8f6416c1aba8" containerID="880ce0a4c73b6fe9e982d827e2d02ff676011187834d8eb261179a12b14b3763" exitCode=0 Jan 23 09:34:39 crc kubenswrapper[4982]: I0123 09:34:39.374345 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" event={"ID":"bb634be2-4ee1-47fb-aa55-8f6416c1aba8","Type":"ContainerDied","Data":"880ce0a4c73b6fe9e982d827e2d02ff676011187834d8eb261179a12b14b3763"} Jan 23 09:34:39 crc kubenswrapper[4982]: I0123 09:34:39.827216 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:34:39 crc kubenswrapper[4982]: E0123 09:34:39.827814 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:34:40 crc kubenswrapper[4982]: I0123 09:34:40.820096 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.019813 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-combined-ca-bundle\") pod \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.020219 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-config-data\") pod \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.020321 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-scripts\") pod \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.020467 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvc5f\" (UniqueName: \"kubernetes.io/projected/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-kube-api-access-xvc5f\") pod \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\" (UID: \"bb634be2-4ee1-47fb-aa55-8f6416c1aba8\") " Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.025568 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-kube-api-access-xvc5f" (OuterVolumeSpecName: "kube-api-access-xvc5f") pod "bb634be2-4ee1-47fb-aa55-8f6416c1aba8" (UID: "bb634be2-4ee1-47fb-aa55-8f6416c1aba8"). InnerVolumeSpecName "kube-api-access-xvc5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.025986 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-scripts" (OuterVolumeSpecName: "scripts") pod "bb634be2-4ee1-47fb-aa55-8f6416c1aba8" (UID: "bb634be2-4ee1-47fb-aa55-8f6416c1aba8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.049903 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-config-data" (OuterVolumeSpecName: "config-data") pod "bb634be2-4ee1-47fb-aa55-8f6416c1aba8" (UID: "bb634be2-4ee1-47fb-aa55-8f6416c1aba8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.062245 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb634be2-4ee1-47fb-aa55-8f6416c1aba8" (UID: "bb634be2-4ee1-47fb-aa55-8f6416c1aba8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.122803 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.122843 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.122854 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.122866 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvc5f\" (UniqueName: \"kubernetes.io/projected/bb634be2-4ee1-47fb-aa55-8f6416c1aba8-kube-api-access-xvc5f\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.414300 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" event={"ID":"bb634be2-4ee1-47fb-aa55-8f6416c1aba8","Type":"ContainerDied","Data":"37142fcfe33eeadd990e5a82f252a6143b15e1b5d3ff0e7e718f0fd62d127731"} Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.414346 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37142fcfe33eeadd990e5a82f252a6143b15e1b5d3ff0e7e718f0fd62d127731" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.414345 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4s4rj" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.481825 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 09:34:41 crc kubenswrapper[4982]: E0123 09:34:41.482374 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb634be2-4ee1-47fb-aa55-8f6416c1aba8" containerName="nova-cell0-conductor-db-sync" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.482399 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb634be2-4ee1-47fb-aa55-8f6416c1aba8" containerName="nova-cell0-conductor-db-sync" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.482674 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb634be2-4ee1-47fb-aa55-8f6416c1aba8" containerName="nova-cell0-conductor-db-sync" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.483431 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.485869 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sc6vd" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.487739 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.496012 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.631659 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7cd99d-7e67-413e-997c-beec4f6b2624-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.631735 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7cd99d-7e67-413e-997c-beec4f6b2624-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.632235 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5x68\" (UniqueName: \"kubernetes.io/projected/df7cd99d-7e67-413e-997c-beec4f6b2624-kube-api-access-s5x68\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.735909 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5x68\" (UniqueName: \"kubernetes.io/projected/df7cd99d-7e67-413e-997c-beec4f6b2624-kube-api-access-s5x68\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.736058 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7cd99d-7e67-413e-997c-beec4f6b2624-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.736126 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7cd99d-7e67-413e-997c-beec4f6b2624-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.742379 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df7cd99d-7e67-413e-997c-beec4f6b2624-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.742491 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df7cd99d-7e67-413e-997c-beec4f6b2624-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.759136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5x68\" (UniqueName: \"kubernetes.io/projected/df7cd99d-7e67-413e-997c-beec4f6b2624-kube-api-access-s5x68\") pod \"nova-cell0-conductor-0\" (UID: \"df7cd99d-7e67-413e-997c-beec4f6b2624\") " pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:41 crc kubenswrapper[4982]: I0123 09:34:41.799101 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:42 crc kubenswrapper[4982]: I0123 09:34:42.251337 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 09:34:42 crc kubenswrapper[4982]: I0123 09:34:42.422976 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"df7cd99d-7e67-413e-997c-beec4f6b2624","Type":"ContainerStarted","Data":"06d6ead0c1328e0156222a8b35e7a867018eae521f615b124296110e9da8f647"} Jan 23 09:34:43 crc kubenswrapper[4982]: I0123 09:34:43.432804 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"df7cd99d-7e67-413e-997c-beec4f6b2624","Type":"ContainerStarted","Data":"8a6a31eebd5c23a8a0015fcc60d358d22a54c3f3fa181b80628550f3d159f90a"} Jan 23 09:34:43 crc kubenswrapper[4982]: I0123 09:34:43.433133 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:43 crc kubenswrapper[4982]: I0123 09:34:43.455690 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.455645968 podStartE2EDuration="2.455645968s" podCreationTimestamp="2026-01-23 09:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:43.454008264 +0000 UTC m=+5957.934371650" watchObservedRunningTime="2026-01-23 09:34:43.455645968 +0000 UTC m=+5957.936009374" Jan 23 09:34:50 crc kubenswrapper[4982]: I0123 09:34:50.828243 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:34:50 crc kubenswrapper[4982]: E0123 09:34:50.829056 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:34:51 crc kubenswrapper[4982]: I0123 09:34:51.825939 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.398510 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zp2tp"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.400655 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.402543 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.402770 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.427040 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zp2tp"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.563328 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-scripts\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.563426 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6sf8\" (UniqueName: \"kubernetes.io/projected/7c5f2d4c-e155-4920-bce0-17b75d698fae-kube-api-access-j6sf8\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.563506 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-config-data\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.563577 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.638029 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.645913 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.646058 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.651858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.665509 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-config-data\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.665581 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.665807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-scripts\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.665858 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6sf8\" (UniqueName: \"kubernetes.io/projected/7c5f2d4c-e155-4920-bce0-17b75d698fae-kube-api-access-j6sf8\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.672871 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.679443 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.680388 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-scripts\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.681979 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.685267 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-config-data\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.686409 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.705985 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.724903 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.726833 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.730365 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.731097 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6sf8\" (UniqueName: \"kubernetes.io/projected/7c5f2d4c-e155-4920-bce0-17b75d698fae-kube-api-access-j6sf8\") pod \"nova-cell0-cell-mapping-zp2tp\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.741906 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.770178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86l26\" (UniqueName: \"kubernetes.io/projected/9ce14684-fad6-4869-bf34-935ea9637b04-kube-api-access-86l26\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.770240 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.770287 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-config-data\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.770304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce14684-fad6-4869-bf34-935ea9637b04-logs\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.806890 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.808326 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.813520 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.856358 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879314 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-config-data\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879375 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879409 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwn5\" (UniqueName: \"kubernetes.io/projected/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-kube-api-access-7mwn5\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879461 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86l26\" (UniqueName: \"kubernetes.io/projected/9ce14684-fad6-4869-bf34-935ea9637b04-kube-api-access-86l26\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879517 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879546 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffd8g\" (UniqueName: \"kubernetes.io/projected/106a0070-ce0a-4126-859c-64fff5dee001-kube-api-access-ffd8g\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879574 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879609 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-config-data\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879640 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce14684-fad6-4869-bf34-935ea9637b04-logs\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879682 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/106a0070-ce0a-4126-859c-64fff5dee001-logs\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.879711 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.883163 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce14684-fad6-4869-bf34-935ea9637b04-logs\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.887245 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-config-data\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.887327 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55b588dccf-tc4xx"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.888212 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.888919 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.907203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86l26\" (UniqueName: \"kubernetes.io/projected/9ce14684-fad6-4869-bf34-935ea9637b04-kube-api-access-86l26\") pod \"nova-api-0\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " pod="openstack/nova-api-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.920191 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b588dccf-tc4xx"] Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982108 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfg7z\" (UniqueName: \"kubernetes.io/projected/02c3dd55-c179-4b2f-acfd-97f8150bba0f-kube-api-access-rfg7z\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982186 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-config-data\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982231 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982263 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwn5\" (UniqueName: \"kubernetes.io/projected/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-kube-api-access-7mwn5\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982381 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffd8g\" (UniqueName: \"kubernetes.io/projected/106a0070-ce0a-4126-859c-64fff5dee001-kube-api-access-ffd8g\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982415 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982464 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982548 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/106a0070-ce0a-4126-859c-64fff5dee001-logs\") pod \"nova-metadata-0\" (UID: 
\"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982576 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-config-data\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.982612 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.986989 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.989084 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/106a0070-ce0a-4126-859c-64fff5dee001-logs\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.992098 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:52 crc kubenswrapper[4982]: I0123 09:34:52.996412 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-config-data\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.003891 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.005517 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.005646 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwn5\" (UniqueName: \"kubernetes.io/projected/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-kube-api-access-7mwn5\") pod \"nova-cell1-novncproxy-0\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.024388 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.032343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffd8g\" (UniqueName: \"kubernetes.io/projected/106a0070-ce0a-4126-859c-64fff5dee001-kube-api-access-ffd8g\") pod \"nova-metadata-0\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.083880 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-sb\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.083942 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-config-data\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.084112 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-config\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.084222 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfg7z\" (UniqueName: \"kubernetes.io/projected/02c3dd55-c179-4b2f-acfd-97f8150bba0f-kube-api-access-rfg7z\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.084295 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27njj\" (UniqueName: \"kubernetes.io/projected/9d700614-24d2-4917-bfba-1fd08e4e62c6-kube-api-access-27njj\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.084644 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-dns-svc\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.084665 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.084741 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-nb\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " 
pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.090233 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-config-data\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.106718 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.108734 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfg7z\" (UniqueName: \"kubernetes.io/projected/02c3dd55-c179-4b2f-acfd-97f8150bba0f-kube-api-access-rfg7z\") pod \"nova-scheduler-0\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") " pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.134142 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.172802 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.186041 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-dns-svc\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.186110 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-nb\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.186132 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-sb\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.186173 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-config\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.186215 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27njj\" (UniqueName: \"kubernetes.io/projected/9d700614-24d2-4917-bfba-1fd08e4e62c6-kube-api-access-27njj\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.187442 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-dns-svc\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.188059 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-nb\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.188755 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-sb\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.188863 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-config\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.192832 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.215453 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27njj\" (UniqueName: \"kubernetes.io/projected/9d700614-24d2-4917-bfba-1fd08e4e62c6-kube-api-access-27njj\") pod \"dnsmasq-dns-55b588dccf-tc4xx\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.221741 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.367379 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.580914 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.624102 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce14684-fad6-4869-bf34-935ea9637b04","Type":"ContainerStarted","Data":"7527f5aeaec37f5760f89db33dc423c5882131e865754cd4fce3f3b189008863"} Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.669791 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zp2tp"] Jan 23 09:34:53 crc kubenswrapper[4982]: W0123 09:34:53.673866 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c5f2d4c_e155_4920_bce0_17b75d698fae.slice/crio-3222636d24c99f4b7f68838e372dbe157bcc55319488fa8cf6e83057130229c9 WatchSource:0}: Error finding container 3222636d24c99f4b7f68838e372dbe157bcc55319488fa8cf6e83057130229c9: Status 404 returned error can't find the container with id 3222636d24c99f4b7f68838e372dbe157bcc55319488fa8cf6e83057130229c9 Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.755268 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q42jr"] Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.756791 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.761189 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.761362 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.784662 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q42jr"] Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.856702 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-config-data\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.856899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq9f8\" (UniqueName: \"kubernetes.io/projected/ac556d5b-d326-4af7-b237-cdecbc452ce9-kube-api-access-gq9f8\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.857204 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-scripts\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 
09:34:53.857466 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.940650 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.955116 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.959026 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-config-data\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.959091 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq9f8\" (UniqueName: \"kubernetes.io/projected/ac556d5b-d326-4af7-b237-cdecbc452ce9-kube-api-access-gq9f8\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.959190 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-scripts\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.959263 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.974438 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-config-data\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.974515 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.974639 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-scripts\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:53 crc kubenswrapper[4982]: I0123 09:34:53.981547 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gq9f8\" (UniqueName: \"kubernetes.io/projected/ac556d5b-d326-4af7-b237-cdecbc452ce9-kube-api-access-gq9f8\") pod \"nova-cell1-conductor-db-sync-q42jr\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.104611 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b588dccf-tc4xx"] Jan 23 09:34:54 crc kubenswrapper[4982]: W0123 09:34:54.119922 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d700614_24d2_4917_bfba_1fd08e4e62c6.slice/crio-e0e777f97b9ad7336ebda96f912493326d61fc3cf50b2e23048d7445fb89d71e WatchSource:0}: Error finding container e0e777f97b9ad7336ebda96f912493326d61fc3cf50b2e23048d7445fb89d71e: Status 404 returned error can't find the container with id e0e777f97b9ad7336ebda96f912493326d61fc3cf50b2e23048d7445fb89d71e Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.132828 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.674689 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02c3dd55-c179-4b2f-acfd-97f8150bba0f","Type":"ContainerStarted","Data":"07256719b9e57c70f2664360d43994f6b6e4aca95099e2b0afabd7d1a167f4f3"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.675221 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02c3dd55-c179-4b2f-acfd-97f8150bba0f","Type":"ContainerStarted","Data":"a9876cfb9cd3b76a208456671a3b548ac57fc22cb4ad71aa3bcdf29ff09ebd7d"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.700329 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8","Type":"ContainerStarted","Data":"90c08413908ba442d9ec1a69270ee2c39da0354f4763b91763a46904084bb3f8"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.700379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8","Type":"ContainerStarted","Data":"af0e237f1ddd5dc035926be506f06dca7c5402db99f1ab9e40d8c0d7d843bd09"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.711843 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.711823914 podStartE2EDuration="2.711823914s" podCreationTimestamp="2026-01-23 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:54.709024098 +0000 UTC m=+5969.189387484" watchObservedRunningTime="2026-01-23 09:34:54.711823914 +0000 UTC m=+5969.192187300" Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.727963 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" event={"ID":"9d700614-24d2-4917-bfba-1fd08e4e62c6","Type":"ContainerStarted","Data":"e0e777f97b9ad7336ebda96f912493326d61fc3cf50b2e23048d7445fb89d71e"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.749317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9ce14684-fad6-4869-bf34-935ea9637b04","Type":"ContainerStarted","Data":"7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.749374 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce14684-fad6-4869-bf34-935ea9637b04","Type":"ContainerStarted","Data":"cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.752128 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"106a0070-ce0a-4126-859c-64fff5dee001","Type":"ContainerStarted","Data":"bc737ba8e930f8916c982a8d6d1504d9bb815ded2e4257843971c9a97795188c"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.752150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"106a0070-ce0a-4126-859c-64fff5dee001","Type":"ContainerStarted","Data":"4f0bd6446c2e09578acf7590e7a8ead2e77cbd78e6dd5bcde5ff2d51a43f95af"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.752159 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"106a0070-ce0a-4126-859c-64fff5dee001","Type":"ContainerStarted","Data":"52e1564d9ac424d25e6114316c57cdaad382956ba7f60bae7ae66c6acb35303b"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.767269 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zp2tp" event={"ID":"7c5f2d4c-e155-4920-bce0-17b75d698fae","Type":"ContainerStarted","Data":"d2c6d4d10787121beff78454c0f54383da5e697a04b7d9e1ae30f1c35fa80dd2"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.767317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zp2tp" event={"ID":"7c5f2d4c-e155-4920-bce0-17b75d698fae","Type":"ContainerStarted","Data":"3222636d24c99f4b7f68838e372dbe157bcc55319488fa8cf6e83057130229c9"} Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.786845 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q42jr"] Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.801077 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.801059768 podStartE2EDuration="2.801059768s" podCreationTimestamp="2026-01-23 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:54.747183955 +0000 UTC m=+5969.227547341" watchObservedRunningTime="2026-01-23 09:34:54.801059768 +0000 UTC m=+5969.281423154" Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.857070 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.857049849 podStartE2EDuration="2.857049849s" podCreationTimestamp="2026-01-23 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:54.789675979 +0000 UTC m=+5969.270039375" watchObservedRunningTime="2026-01-23 09:34:54.857049849 +0000 UTC m=+5969.337413235" Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.884566 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.884542736 podStartE2EDuration="2.884542736s" podCreationTimestamp="2026-01-23 09:34:52 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:54.870301729 +0000 UTC m=+5969.350665115" watchObservedRunningTime="2026-01-23 09:34:54.884542736 +0000 UTC m=+5969.364906122" Jan 23 09:34:54 crc kubenswrapper[4982]: I0123 09:34:54.917446 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zp2tp" podStartSLOduration=2.917430289 podStartE2EDuration="2.917430289s" podCreationTimestamp="2026-01-23 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:54.913286276 +0000 UTC m=+5969.393649662" watchObservedRunningTime="2026-01-23 09:34:54.917430289 +0000 UTC m=+5969.397793665" Jan 23 09:34:55 crc kubenswrapper[4982]: I0123 09:34:55.779387 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q42jr" event={"ID":"ac556d5b-d326-4af7-b237-cdecbc452ce9","Type":"ContainerStarted","Data":"439a8ba3ae7930dfbb339f69f915700daee3f9cb4a4bf728edb5d3556463d003"} Jan 23 09:34:55 crc kubenswrapper[4982]: I0123 09:34:55.780060 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q42jr" event={"ID":"ac556d5b-d326-4af7-b237-cdecbc452ce9","Type":"ContainerStarted","Data":"b5cbb50de235b2068f487fa9c3129518523a28e2c05caf506e6ecfaa1d42e22e"} Jan 23 09:34:55 crc kubenswrapper[4982]: I0123 09:34:55.781299 4982 generic.go:334] "Generic (PLEG): container finished" podID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerID="d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6" exitCode=0 Jan 23 09:34:55 crc kubenswrapper[4982]: I0123 09:34:55.781348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" event={"ID":"9d700614-24d2-4917-bfba-1fd08e4e62c6","Type":"ContainerDied","Data":"d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6"} Jan 23 09:34:55 crc kubenswrapper[4982]: I0123 09:34:55.847413 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-q42jr" podStartSLOduration=2.847387217 podStartE2EDuration="2.847387217s" podCreationTimestamp="2026-01-23 09:34:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:34:55.812966652 +0000 UTC m=+5970.293330068" watchObservedRunningTime="2026-01-23 09:34:55.847387217 +0000 UTC m=+5970.327750603" Jan 23 09:34:56 crc kubenswrapper[4982]: I0123 09:34:56.793401 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" event={"ID":"9d700614-24d2-4917-bfba-1fd08e4e62c6","Type":"ContainerStarted","Data":"c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde"} Jan 23 09:34:56 crc kubenswrapper[4982]: I0123 09:34:56.793760 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:34:56 crc kubenswrapper[4982]: I0123 09:34:56.821455 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" podStartSLOduration=4.821433142 podStartE2EDuration="4.821433142s" podCreationTimestamp="2026-01-23 09:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 09:34:56.812346195 +0000 UTC m=+5971.292709581" watchObservedRunningTime="2026-01-23 09:34:56.821433142 +0000 UTC m=+5971.301796528" Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.601118 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.601627 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-log" containerID="cri-o://4f0bd6446c2e09578acf7590e7a8ead2e77cbd78e6dd5bcde5ff2d51a43f95af" gracePeriod=30 Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.601752 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-metadata" containerID="cri-o://bc737ba8e930f8916c982a8d6d1504d9bb815ded2e4257843971c9a97795188c" gracePeriod=30 Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.624580 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.624792 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://90c08413908ba442d9ec1a69270ee2c39da0354f4763b91763a46904084bb3f8" gracePeriod=30 Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.803520 4982 generic.go:334] "Generic (PLEG): container finished" podID="106a0070-ce0a-4126-859c-64fff5dee001" containerID="4f0bd6446c2e09578acf7590e7a8ead2e77cbd78e6dd5bcde5ff2d51a43f95af" exitCode=143 Jan 23 09:34:57 crc kubenswrapper[4982]: I0123 09:34:57.803603 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"106a0070-ce0a-4126-859c-64fff5dee001","Type":"ContainerDied","Data":"4f0bd6446c2e09578acf7590e7a8ead2e77cbd78e6dd5bcde5ff2d51a43f95af"} Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.135102 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.135169 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.174391 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.193382 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.815601 4982 generic.go:334] "Generic (PLEG): container finished" podID="106a0070-ce0a-4126-859c-64fff5dee001" containerID="bc737ba8e930f8916c982a8d6d1504d9bb815ded2e4257843971c9a97795188c" exitCode=0 Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.816034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"106a0070-ce0a-4126-859c-64fff5dee001","Type":"ContainerDied","Data":"bc737ba8e930f8916c982a8d6d1504d9bb815ded2e4257843971c9a97795188c"} Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.816150 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"106a0070-ce0a-4126-859c-64fff5dee001","Type":"ContainerDied","Data":"52e1564d9ac424d25e6114316c57cdaad382956ba7f60bae7ae66c6acb35303b"} Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.816162 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52e1564d9ac424d25e6114316c57cdaad382956ba7f60bae7ae66c6acb35303b" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.817759 4982 generic.go:334] "Generic (PLEG): container finished" podID="ac556d5b-d326-4af7-b237-cdecbc452ce9" containerID="439a8ba3ae7930dfbb339f69f915700daee3f9cb4a4bf728edb5d3556463d003" exitCode=0 Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.817803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q42jr" event={"ID":"ac556d5b-d326-4af7-b237-cdecbc452ce9","Type":"ContainerDied","Data":"439a8ba3ae7930dfbb339f69f915700daee3f9cb4a4bf728edb5d3556463d003"} Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.819808 4982 generic.go:334] "Generic (PLEG): container finished" podID="2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" containerID="90c08413908ba442d9ec1a69270ee2c39da0354f4763b91763a46904084bb3f8" exitCode=0 Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.820762 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8","Type":"ContainerDied","Data":"90c08413908ba442d9ec1a69270ee2c39da0354f4763b91763a46904084bb3f8"} Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.820788 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8","Type":"ContainerDied","Data":"af0e237f1ddd5dc035926be506f06dca7c5402db99f1ab9e40d8c0d7d843bd09"} Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.820800 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af0e237f1ddd5dc035926be506f06dca7c5402db99f1ab9e40d8c0d7d843bd09" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.833051 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.838625 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.981690 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-config-data\") pod \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.981747 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-combined-ca-bundle\") pod \"106a0070-ce0a-4126-859c-64fff5dee001\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.981791 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffd8g\" (UniqueName: \"kubernetes.io/projected/106a0070-ce0a-4126-859c-64fff5dee001-kube-api-access-ffd8g\") pod \"106a0070-ce0a-4126-859c-64fff5dee001\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.981967 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/106a0070-ce0a-4126-859c-64fff5dee001-logs\") pod \"106a0070-ce0a-4126-859c-64fff5dee001\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.982064 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-config-data\") pod \"106a0070-ce0a-4126-859c-64fff5dee001\" (UID: \"106a0070-ce0a-4126-859c-64fff5dee001\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.982096 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-combined-ca-bundle\") pod \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.982135 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mwn5\" (UniqueName: \"kubernetes.io/projected/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-kube-api-access-7mwn5\") pod \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\" (UID: \"2462b7eb-b749-4f03-bb3c-d6be4e46f8d8\") " Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.982593 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106a0070-ce0a-4126-859c-64fff5dee001-logs" (OuterVolumeSpecName: "logs") pod "106a0070-ce0a-4126-859c-64fff5dee001" (UID: "106a0070-ce0a-4126-859c-64fff5dee001"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:34:58 crc kubenswrapper[4982]: I0123 09:34:58.982728 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/106a0070-ce0a-4126-859c-64fff5dee001-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.006916 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-kube-api-access-7mwn5" (OuterVolumeSpecName: "kube-api-access-7mwn5") pod "2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" (UID: "2462b7eb-b749-4f03-bb3c-d6be4e46f8d8"). InnerVolumeSpecName "kube-api-access-7mwn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.007069 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106a0070-ce0a-4126-859c-64fff5dee001-kube-api-access-ffd8g" (OuterVolumeSpecName: "kube-api-access-ffd8g") pod "106a0070-ce0a-4126-859c-64fff5dee001" (UID: "106a0070-ce0a-4126-859c-64fff5dee001"). InnerVolumeSpecName "kube-api-access-ffd8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.015599 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" (UID: "2462b7eb-b749-4f03-bb3c-d6be4e46f8d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.016233 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "106a0070-ce0a-4126-859c-64fff5dee001" (UID: "106a0070-ce0a-4126-859c-64fff5dee001"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.016771 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-config-data" (OuterVolumeSpecName: "config-data") pod "2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" (UID: "2462b7eb-b749-4f03-bb3c-d6be4e46f8d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.024026 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-config-data" (OuterVolumeSpecName: "config-data") pod "106a0070-ce0a-4126-859c-64fff5dee001" (UID: "106a0070-ce0a-4126-859c-64fff5dee001"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.084098 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.084146 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mwn5\" (UniqueName: \"kubernetes.io/projected/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-kube-api-access-7mwn5\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.084160 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.084171 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.084180 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffd8g\" (UniqueName: \"kubernetes.io/projected/106a0070-ce0a-4126-859c-64fff5dee001-kube-api-access-ffd8g\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.084202 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106a0070-ce0a-4126-859c-64fff5dee001-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.838003 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.838016 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.893698 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.913157 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.932273 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.948701 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.964152 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:34:59 crc kubenswrapper[4982]: E0123 09:34:59.964707 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-metadata" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.964724 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-metadata" Jan 23 09:34:59 crc kubenswrapper[4982]: E0123 09:34:59.964750 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-log" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.964756 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-log" Jan 23 09:34:59 crc kubenswrapper[4982]: E0123 09:34:59.964770 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.964776 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.964998 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-metadata" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.965012 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="106a0070-ce0a-4126-859c-64fff5dee001" containerName="nova-metadata-log" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.965019 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.965718 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.972393 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.972461 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.972567 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 09:34:59 crc kubenswrapper[4982]: I0123 09:34:59.985932 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.000428 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.005421 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.006135 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.006188 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.035052 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119432 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119497 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d72ae6e3-2745-4199-b115-5ec3a4b966b7-logs\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119585 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119662 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119741 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 
09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119773 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-config-data\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119791 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67q7\" (UniqueName: \"kubernetes.io/projected/4aa6ed7d-f756-4ae3-b708-0f172751af6d-kube-api-access-z67q7\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119819 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skcms\" (UniqueName: \"kubernetes.io/projected/d72ae6e3-2745-4199-b115-5ec3a4b966b7-kube-api-access-skcms\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119852 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.119873 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d72ae6e3-2745-4199-b115-5ec3a4b966b7-logs\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222565 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222693 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222731 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-config-data\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222758 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67q7\" (UniqueName: \"kubernetes.io/projected/4aa6ed7d-f756-4ae3-b708-0f172751af6d-kube-api-access-z67q7\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222794 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skcms\" (UniqueName: \"kubernetes.io/projected/d72ae6e3-2745-4199-b115-5ec3a4b966b7-kube-api-access-skcms\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222831 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222858 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.222933 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.223056 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d72ae6e3-2745-4199-b115-5ec3a4b966b7-logs\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.229118 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.229155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.229889 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-config-data\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") 
" pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.230192 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.232446 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.238815 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.241326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa6ed7d-f756-4ae3-b708-0f172751af6d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.245084 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skcms\" (UniqueName: \"kubernetes.io/projected/d72ae6e3-2745-4199-b115-5ec3a4b966b7-kube-api-access-skcms\") pod \"nova-metadata-0\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.245246 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67q7\" (UniqueName: \"kubernetes.io/projected/4aa6ed7d-f756-4ae3-b708-0f172751af6d-kube-api-access-z67q7\") pod \"nova-cell1-novncproxy-0\" (UID: \"4aa6ed7d-f756-4ae3-b708-0f172751af6d\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.290129 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.327436 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.345476 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.428913 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-combined-ca-bundle\") pod \"ac556d5b-d326-4af7-b237-cdecbc452ce9\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.429074 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq9f8\" (UniqueName: \"kubernetes.io/projected/ac556d5b-d326-4af7-b237-cdecbc452ce9-kube-api-access-gq9f8\") pod \"ac556d5b-d326-4af7-b237-cdecbc452ce9\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.429319 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-scripts\") pod \"ac556d5b-d326-4af7-b237-cdecbc452ce9\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.429360 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-config-data\") pod \"ac556d5b-d326-4af7-b237-cdecbc452ce9\" (UID: \"ac556d5b-d326-4af7-b237-cdecbc452ce9\") " Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.441163 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac556d5b-d326-4af7-b237-cdecbc452ce9-kube-api-access-gq9f8" (OuterVolumeSpecName: "kube-api-access-gq9f8") pod "ac556d5b-d326-4af7-b237-cdecbc452ce9" (UID: "ac556d5b-d326-4af7-b237-cdecbc452ce9"). InnerVolumeSpecName "kube-api-access-gq9f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.450549 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-scripts" (OuterVolumeSpecName: "scripts") pod "ac556d5b-d326-4af7-b237-cdecbc452ce9" (UID: "ac556d5b-d326-4af7-b237-cdecbc452ce9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.474530 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-config-data" (OuterVolumeSpecName: "config-data") pod "ac556d5b-d326-4af7-b237-cdecbc452ce9" (UID: "ac556d5b-d326-4af7-b237-cdecbc452ce9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.484959 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac556d5b-d326-4af7-b237-cdecbc452ce9" (UID: "ac556d5b-d326-4af7-b237-cdecbc452ce9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.535341 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.535378 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq9f8\" (UniqueName: \"kubernetes.io/projected/ac556d5b-d326-4af7-b237-cdecbc452ce9-kube-api-access-gq9f8\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.535390 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.535400 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac556d5b-d326-4af7-b237-cdecbc452ce9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.849051 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-q42jr" event={"ID":"ac556d5b-d326-4af7-b237-cdecbc452ce9","Type":"ContainerDied","Data":"b5cbb50de235b2068f487fa9c3129518523a28e2c05caf506e6ecfaa1d42e22e"} Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.849406 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5cbb50de235b2068f487fa9c3129518523a28e2c05caf506e6ecfaa1d42e22e" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.849100 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-q42jr" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.850558 4982 generic.go:334] "Generic (PLEG): container finished" podID="7c5f2d4c-e155-4920-bce0-17b75d698fae" containerID="d2c6d4d10787121beff78454c0f54383da5e697a04b7d9e1ae30f1c35fa80dd2" exitCode=0 Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.850589 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zp2tp" event={"ID":"7c5f2d4c-e155-4920-bce0-17b75d698fae","Type":"ContainerDied","Data":"d2c6d4d10787121beff78454c0f54383da5e697a04b7d9e1ae30f1c35fa80dd2"} Jan 23 09:35:00 crc kubenswrapper[4982]: W0123 09:35:00.893377 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aa6ed7d_f756_4ae3_b708_0f172751af6d.slice/crio-e3200f458611e872c0287746eb161adb092aebc909140ec3d79bd8b57812fa72 WatchSource:0}: Error finding container e3200f458611e872c0287746eb161adb092aebc909140ec3d79bd8b57812fa72: Status 404 returned error can't find the container with id e3200f458611e872c0287746eb161adb092aebc909140ec3d79bd8b57812fa72 Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.896813 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.908195 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.971061 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 09:35:00 crc kubenswrapper[4982]: E0123 09:35:00.971520 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac556d5b-d326-4af7-b237-cdecbc452ce9" containerName="nova-cell1-conductor-db-sync" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.971533 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac556d5b-d326-4af7-b237-cdecbc452ce9" containerName="nova-cell1-conductor-db-sync" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.971852 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac556d5b-d326-4af7-b237-cdecbc452ce9" containerName="nova-cell1-conductor-db-sync" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.972559 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.975445 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 09:35:00 crc kubenswrapper[4982]: I0123 09:35:00.983967 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.045070 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17176ca4-3624-4cef-95d5-0f36fa599d49-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.045135 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5mv\" (UniqueName: \"kubernetes.io/projected/17176ca4-3624-4cef-95d5-0f36fa599d49-kube-api-access-dd5mv\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.045252 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17176ca4-3624-4cef-95d5-0f36fa599d49-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.147022 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17176ca4-3624-4cef-95d5-0f36fa599d49-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.147350 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17176ca4-3624-4cef-95d5-0f36fa599d49-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.147460 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd5mv\" (UniqueName: \"kubernetes.io/projected/17176ca4-3624-4cef-95d5-0f36fa599d49-kube-api-access-dd5mv\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.152154 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/17176ca4-3624-4cef-95d5-0f36fa599d49-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.154235 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17176ca4-3624-4cef-95d5-0f36fa599d49-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.164411 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd5mv\" (UniqueName: \"kubernetes.io/projected/17176ca4-3624-4cef-95d5-0f36fa599d49-kube-api-access-dd5mv\") pod \"nova-cell1-conductor-0\" (UID: \"17176ca4-3624-4cef-95d5-0f36fa599d49\") " pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.312943 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.708491 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-slksn"] Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.712889 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.726693 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-slksn"] Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.767040 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-catalog-content\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.767355 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-utilities\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.767463 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrh64\" (UniqueName: \"kubernetes.io/projected/258efe66-cd66-4684-830a-1b801004d246-kube-api-access-wrh64\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.826042 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.869031 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-catalog-content\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.870432 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-utilities\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.870537 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrh64\" (UniqueName: \"kubernetes.io/projected/258efe66-cd66-4684-830a-1b801004d246-kube-api-access-wrh64\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.869447 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-catalog-content\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.871431 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-utilities\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.878202 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106a0070-ce0a-4126-859c-64fff5dee001" path="/var/lib/kubelet/pods/106a0070-ce0a-4126-859c-64fff5dee001/volumes" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.879158 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2462b7eb-b749-4f03-bb3c-d6be4e46f8d8" path="/var/lib/kubelet/pods/2462b7eb-b749-4f03-bb3c-d6be4e46f8d8/volumes" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.900311 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrh64\" (UniqueName: \"kubernetes.io/projected/258efe66-cd66-4684-830a-1b801004d246-kube-api-access-wrh64\") pod \"redhat-marketplace-slksn\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.912563 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"17176ca4-3624-4cef-95d5-0f36fa599d49","Type":"ContainerStarted","Data":"3a666ef8391b6a1ae03cd2ba6a53fcf9c81fe6e0329ce63d9a38280b9c8dfd60"} Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.929147 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4aa6ed7d-f756-4ae3-b708-0f172751af6d","Type":"ContainerStarted","Data":"90d0383e6166b0d34e1a2fefbfaf16bfcf229138f6c890cbc2fdbafa31fd6e33"} Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.929199 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4aa6ed7d-f756-4ae3-b708-0f172751af6d","Type":"ContainerStarted","Data":"e3200f458611e872c0287746eb161adb092aebc909140ec3d79bd8b57812fa72"} Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.956838 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d72ae6e3-2745-4199-b115-5ec3a4b966b7","Type":"ContainerStarted","Data":"db92d01d4aa501baaa59ba2b0fa965a0287133c980c7b5451e8c007c76ed0481"} Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.956906 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d72ae6e3-2745-4199-b115-5ec3a4b966b7","Type":"ContainerStarted","Data":"9e54f794be351916b9949dd551e9010e437964903e72c4c5cfda895b0941792c"} Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.956922 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d72ae6e3-2745-4199-b115-5ec3a4b966b7","Type":"ContainerStarted","Data":"5cf5da6b1b06e852d10bf45d3de6ec05d99dc546a423a70bd6b53567413a5529"} Jan 23 09:35:01 crc kubenswrapper[4982]: I0123 09:35:01.990586 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.9905602 podStartE2EDuration="2.9905602s" podCreationTimestamp="2026-01-23 09:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:01.971810861 +0000 UTC m=+5976.452174247" watchObservedRunningTime="2026-01-23 09:35:01.9905602 +0000 UTC m=+5976.470923586" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.011851 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.011799207 podStartE2EDuration="3.011799207s" podCreationTimestamp="2026-01-23 09:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:01.999893533 +0000 UTC m=+5976.480256919" watchObservedRunningTime="2026-01-23 09:35:02.011799207 +0000 UTC m=+5976.492162603" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.043497 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.429428 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.491677 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6sf8\" (UniqueName: \"kubernetes.io/projected/7c5f2d4c-e155-4920-bce0-17b75d698fae-kube-api-access-j6sf8\") pod \"7c5f2d4c-e155-4920-bce0-17b75d698fae\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.491821 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-config-data\") pod \"7c5f2d4c-e155-4920-bce0-17b75d698fae\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.492054 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-scripts\") pod \"7c5f2d4c-e155-4920-bce0-17b75d698fae\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.492149 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-combined-ca-bundle\") pod \"7c5f2d4c-e155-4920-bce0-17b75d698fae\" (UID: \"7c5f2d4c-e155-4920-bce0-17b75d698fae\") " Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.500022 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-scripts" (OuterVolumeSpecName: "scripts") pod "7c5f2d4c-e155-4920-bce0-17b75d698fae" (UID: "7c5f2d4c-e155-4920-bce0-17b75d698fae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.542757 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c5f2d4c-e155-4920-bce0-17b75d698fae" (UID: "7c5f2d4c-e155-4920-bce0-17b75d698fae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.543457 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5f2d4c-e155-4920-bce0-17b75d698fae-kube-api-access-j6sf8" (OuterVolumeSpecName: "kube-api-access-j6sf8") pod "7c5f2d4c-e155-4920-bce0-17b75d698fae" (UID: "7c5f2d4c-e155-4920-bce0-17b75d698fae"). InnerVolumeSpecName "kube-api-access-j6sf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.545773 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-config-data" (OuterVolumeSpecName: "config-data") pod "7c5f2d4c-e155-4920-bce0-17b75d698fae" (UID: "7c5f2d4c-e155-4920-bce0-17b75d698fae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.597870 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.597907 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.597917 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c5f2d4c-e155-4920-bce0-17b75d698fae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.597939 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6sf8\" (UniqueName: \"kubernetes.io/projected/7c5f2d4c-e155-4920-bce0-17b75d698fae-kube-api-access-j6sf8\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.605600 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-slksn"] Jan 23 09:35:02 crc kubenswrapper[4982]: W0123 09:35:02.606894 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod258efe66_cd66_4684_830a_1b801004d246.slice/crio-c2f2967689d365bff6479c573b0113de4378e3ba2226d4acbf152de43fbcf9f4 WatchSource:0}: Error finding container c2f2967689d365bff6479c573b0113de4378e3ba2226d4acbf152de43fbcf9f4: Status 404 returned error can't find the container with id c2f2967689d365bff6479c573b0113de4378e3ba2226d4acbf152de43fbcf9f4 Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.966909 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"17176ca4-3624-4cef-95d5-0f36fa599d49","Type":"ContainerStarted","Data":"d122c3106ad499502ddafbed6027d3919c53a39c7b1e9c2ff9bb6d37e6dd91e7"} Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.968000 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.969332 4982 generic.go:334] "Generic (PLEG): container finished" podID="258efe66-cd66-4684-830a-1b801004d246" containerID="2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856" exitCode=0 Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.969366 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerDied","Data":"2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856"} Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.969394 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerStarted","Data":"c2f2967689d365bff6479c573b0113de4378e3ba2226d4acbf152de43fbcf9f4"} Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.971845 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.971869 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zp2tp" 
event={"ID":"7c5f2d4c-e155-4920-bce0-17b75d698fae","Type":"ContainerDied","Data":"3222636d24c99f4b7f68838e372dbe157bcc55319488fa8cf6e83057130229c9"} Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.971898 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3222636d24c99f4b7f68838e372dbe157bcc55319488fa8cf6e83057130229c9" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.971911 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zp2tp" Jan 23 09:35:02 crc kubenswrapper[4982]: I0123 09:35:02.999598 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.999569746 podStartE2EDuration="2.999569746s" podCreationTimestamp="2026-01-23 09:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:02.990698695 +0000 UTC m=+5977.471062101" watchObservedRunningTime="2026-01-23 09:35:02.999569746 +0000 UTC m=+5977.479933132" Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.004894 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.004988 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.080255 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.102440 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.102703 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="02c3dd55-c179-4b2f-acfd-97f8150bba0f" containerName="nova-scheduler-scheduler" containerID="cri-o://07256719b9e57c70f2664360d43994f6b6e4aca95099e2b0afabd7d1a167f4f3" gracePeriod=30 Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.117589 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.224799 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.322031 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8df46d887-g2k4g"] Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.322218 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerName="dnsmasq-dns" containerID="cri-o://c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e" gracePeriod=10 Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.829503 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:35:03 crc kubenswrapper[4982]: E0123 09:35:03.830049 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:35:03 crc kubenswrapper[4982]: I0123 09:35:03.987233 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.012127 4982 generic.go:334] "Generic (PLEG): container finished" podID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerID="c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e" exitCode=0 Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.013485 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.013908 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" event={"ID":"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8","Type":"ContainerDied","Data":"c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e"} Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.014023 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8df46d887-g2k4g" event={"ID":"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8","Type":"ContainerDied","Data":"371c31808d9744569b1d9922126fe68aeaddf82e5f3630b971dc0a197cdb910a"} Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.014139 4982 scope.go:117] "RemoveContainer" containerID="c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.014450 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-log" containerID="cri-o://cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.014995 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-log" containerID="cri-o://9e54f794be351916b9949dd551e9010e437964903e72c4c5cfda895b0941792c" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.015164 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-api" containerID="cri-o://7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.015355 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-metadata" containerID="cri-o://db92d01d4aa501baaa59ba2b0fa965a0287133c980c7b5451e8c007c76ed0481" gracePeriod=30 Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.039158 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.97:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.039317 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-api" probeResult="failure" 
output="Get \"http://10.217.1.97:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.085946 4982 scope.go:117] "RemoveContainer" containerID="35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.129818 4982 scope.go:117] "RemoveContainer" containerID="c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e" Jan 23 09:35:04 crc kubenswrapper[4982]: E0123 09:35:04.130713 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e\": container with ID starting with c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e not found: ID does not exist" containerID="c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.130863 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e"} err="failed to get container status \"c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e\": rpc error: code = NotFound desc = could not find container \"c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e\": container with ID starting with c8390de621be057edbccc452746374e263ebfccf713e78365a13a07db165298e not found: ID does not exist" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.130986 4982 scope.go:117] "RemoveContainer" containerID="35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3" Jan 23 09:35:04 crc kubenswrapper[4982]: E0123 09:35:04.131411 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3\": container with ID starting with 35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3 not found: ID does not exist" containerID="35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.131544 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3"} err="failed to get container status \"35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3\": rpc error: code = NotFound desc = could not find container \"35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3\": container with ID starting with 35ebbaea13c9374d1f7b95c99cd47345404ee87ec2544f6f455feae2a46c8fd3 not found: ID does not exist" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.142982 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-dns-svc\") pod \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.143274 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-nb\") pod \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.143428 4982 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2hxd\" (UniqueName: \"kubernetes.io/projected/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-kube-api-access-b2hxd\") pod \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.143522 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-sb\") pod \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.143878 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-config\") pod \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\" (UID: \"bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8\") " Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.162313 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-kube-api-access-b2hxd" (OuterVolumeSpecName: "kube-api-access-b2hxd") pod "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" (UID: "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8"). InnerVolumeSpecName "kube-api-access-b2hxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.213918 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" (UID: "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.224686 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-config" (OuterVolumeSpecName: "config") pod "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" (UID: "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.241119 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" (UID: "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.248119 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2hxd\" (UniqueName: \"kubernetes.io/projected/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-kube-api-access-b2hxd\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.248145 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.248154 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.248163 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.255765 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" (UID: "bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.291763 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x22p8"] Jan 23 09:35:04 crc kubenswrapper[4982]: E0123 09:35:04.292291 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5f2d4c-e155-4920-bce0-17b75d698fae" containerName="nova-manage" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.292308 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5f2d4c-e155-4920-bce0-17b75d698fae" containerName="nova-manage" Jan 23 09:35:04 crc kubenswrapper[4982]: E0123 09:35:04.292331 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerName="init" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.292337 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerName="init" Jan 23 09:35:04 crc kubenswrapper[4982]: E0123 09:35:04.292354 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerName="dnsmasq-dns" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.292360 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerName="dnsmasq-dns" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.292549 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" containerName="dnsmasq-dns" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.292577 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5f2d4c-e155-4920-bce0-17b75d698fae" containerName="nova-manage" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.294616 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.314466 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x22p8"] Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.352755 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.446865 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8df46d887-g2k4g"] Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.456771 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-catalog-content\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.456822 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-utilities\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.457010 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2xlf\" (UniqueName: \"kubernetes.io/projected/6a6bef32-d56e-4fb3-977d-452837e185a8-kube-api-access-f2xlf\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.464091 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8df46d887-g2k4g"] Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.560252 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2xlf\" (UniqueName: \"kubernetes.io/projected/6a6bef32-d56e-4fb3-977d-452837e185a8-kube-api-access-f2xlf\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.560677 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-catalog-content\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.560728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-utilities\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.561230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-utilities\") pod 
\"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.561268 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-catalog-content\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.585254 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2xlf\" (UniqueName: \"kubernetes.io/projected/6a6bef32-d56e-4fb3-977d-452837e185a8-kube-api-access-f2xlf\") pod \"certified-operators-x22p8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:04 crc kubenswrapper[4982]: I0123 09:35:04.694546 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.053975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerStarted","Data":"0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60"} Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.072767 4982 generic.go:334] "Generic (PLEG): container finished" podID="9ce14684-fad6-4869-bf34-935ea9637b04" containerID="cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2" exitCode=143 Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.072849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce14684-fad6-4869-bf34-935ea9637b04","Type":"ContainerDied","Data":"cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2"} Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.082473 4982 generic.go:334] "Generic (PLEG): container finished" podID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerID="db92d01d4aa501baaa59ba2b0fa965a0287133c980c7b5451e8c007c76ed0481" exitCode=0 Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.082507 4982 generic.go:334] "Generic (PLEG): container finished" podID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerID="9e54f794be351916b9949dd551e9010e437964903e72c4c5cfda895b0941792c" exitCode=143 Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.082531 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d72ae6e3-2745-4199-b115-5ec3a4b966b7","Type":"ContainerDied","Data":"db92d01d4aa501baaa59ba2b0fa965a0287133c980c7b5451e8c007c76ed0481"} Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.082564 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d72ae6e3-2745-4199-b115-5ec3a4b966b7","Type":"ContainerDied","Data":"9e54f794be351916b9949dd551e9010e437964903e72c4c5cfda895b0941792c"} Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.291077 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.345816 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.345884 4982 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.380116 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x22p8"] Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.597478 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.695718 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d72ae6e3-2745-4199-b115-5ec3a4b966b7-logs\") pod \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.695816 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-combined-ca-bundle\") pod \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.695850 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-config-data\") pod \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.695967 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-nova-metadata-tls-certs\") pod \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.695987 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skcms\" (UniqueName: \"kubernetes.io/projected/d72ae6e3-2745-4199-b115-5ec3a4b966b7-kube-api-access-skcms\") pod \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\" (UID: \"d72ae6e3-2745-4199-b115-5ec3a4b966b7\") " Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.696054 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72ae6e3-2745-4199-b115-5ec3a4b966b7-logs" (OuterVolumeSpecName: "logs") pod "d72ae6e3-2745-4199-b115-5ec3a4b966b7" (UID: "d72ae6e3-2745-4199-b115-5ec3a4b966b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.696423 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d72ae6e3-2745-4199-b115-5ec3a4b966b7-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.704939 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72ae6e3-2745-4199-b115-5ec3a4b966b7-kube-api-access-skcms" (OuterVolumeSpecName: "kube-api-access-skcms") pod "d72ae6e3-2745-4199-b115-5ec3a4b966b7" (UID: "d72ae6e3-2745-4199-b115-5ec3a4b966b7"). InnerVolumeSpecName "kube-api-access-skcms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.729832 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d72ae6e3-2745-4199-b115-5ec3a4b966b7" (UID: "d72ae6e3-2745-4199-b115-5ec3a4b966b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.801235 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.801270 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skcms\" (UniqueName: \"kubernetes.io/projected/d72ae6e3-2745-4199-b115-5ec3a4b966b7-kube-api-access-skcms\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.803815 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-config-data" (OuterVolumeSpecName: "config-data") pod "d72ae6e3-2745-4199-b115-5ec3a4b966b7" (UID: "d72ae6e3-2745-4199-b115-5ec3a4b966b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.806732 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d72ae6e3-2745-4199-b115-5ec3a4b966b7" (UID: "d72ae6e3-2745-4199-b115-5ec3a4b966b7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.841025 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8" path="/var/lib/kubelet/pods/bc4dfa1c-6e80-4c38-acc2-72eefdf4c2d8/volumes" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.902921 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:05 crc kubenswrapper[4982]: I0123 09:35:05.902965 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d72ae6e3-2745-4199-b115-5ec3a4b966b7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.092851 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x22p8" event={"ID":"6a6bef32-d56e-4fb3-977d-452837e185a8","Type":"ContainerStarted","Data":"abb499387c5de84827c6b414fa2beca99d48250e477b924d4902eb20583d84a9"} Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.095730 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d72ae6e3-2745-4199-b115-5ec3a4b966b7","Type":"ContainerDied","Data":"5cf5da6b1b06e852d10bf45d3de6ec05d99dc546a423a70bd6b53567413a5529"} Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.095802 4982 scope.go:117] "RemoveContainer" containerID="db92d01d4aa501baaa59ba2b0fa965a0287133c980c7b5451e8c007c76ed0481" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.095954 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.121852 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.146831 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.148885 4982 scope.go:117] "RemoveContainer" containerID="9e54f794be351916b9949dd551e9010e437964903e72c4c5cfda895b0941792c" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.173895 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:06 crc kubenswrapper[4982]: E0123 09:35:06.174455 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-log" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.174469 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-log" Jan 23 09:35:06 crc kubenswrapper[4982]: E0123 09:35:06.174482 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-metadata" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.174489 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-metadata" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.174714 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-log" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.174740 4982 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" containerName="nova-metadata-metadata" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.175917 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.179702 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.180226 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.188243 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.311985 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98163aaa-b1a2-4388-bfcb-ec922843a523-logs\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.312616 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.312968 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.313266 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/98163aaa-b1a2-4388-bfcb-ec922843a523-kube-api-access-qhx84\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.313525 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-config-data\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.419304 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.420539 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/98163aaa-b1a2-4388-bfcb-ec922843a523-kube-api-access-qhx84\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.420720 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-config-data\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.421033 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98163aaa-b1a2-4388-bfcb-ec922843a523-logs\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.421329 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.424136 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98163aaa-b1a2-4388-bfcb-ec922843a523-logs\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.425800 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.437600 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-config-data\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.440229 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/98163aaa-b1a2-4388-bfcb-ec922843a523-kube-api-access-qhx84\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.445477 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.492194 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:06 crc kubenswrapper[4982]: I0123 09:35:06.938848 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:07 crc kubenswrapper[4982]: I0123 09:35:07.106816 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98163aaa-b1a2-4388-bfcb-ec922843a523","Type":"ContainerStarted","Data":"be54e344c3f53a7cd042f796e19c4ef665782b1ce1ecf67fc65bc61e4876cc4e"} Jan 23 09:35:07 crc kubenswrapper[4982]: I0123 09:35:07.113913 4982 generic.go:334] "Generic (PLEG): container finished" podID="258efe66-cd66-4684-830a-1b801004d246" containerID="0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60" exitCode=0 Jan 23 09:35:07 crc kubenswrapper[4982]: I0123 09:35:07.114006 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerDied","Data":"0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60"} Jan 23 09:35:07 crc kubenswrapper[4982]: I0123 09:35:07.116697 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerID="8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974" exitCode=0 Jan 23 09:35:07 crc kubenswrapper[4982]: I0123 09:35:07.116740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x22p8" event={"ID":"6a6bef32-d56e-4fb3-977d-452837e185a8","Type":"ContainerDied","Data":"8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974"} Jan 23 09:35:07 crc kubenswrapper[4982]: I0123 09:35:07.841443 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72ae6e3-2745-4199-b115-5ec3a4b966b7" path="/var/lib/kubelet/pods/d72ae6e3-2745-4199-b115-5ec3a4b966b7/volumes" Jan 23 09:35:08 crc kubenswrapper[4982]: I0123 09:35:08.128182 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98163aaa-b1a2-4388-bfcb-ec922843a523","Type":"ContainerStarted","Data":"a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac"} Jan 23 09:35:08 crc kubenswrapper[4982]: I0123 09:35:08.128234 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98163aaa-b1a2-4388-bfcb-ec922843a523","Type":"ContainerStarted","Data":"2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe"} Jan 23 09:35:08 crc kubenswrapper[4982]: I0123 09:35:08.134343 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerStarted","Data":"96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f"} Jan 23 09:35:08 crc kubenswrapper[4982]: I0123 09:35:08.153973 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.153924801 podStartE2EDuration="2.153924801s" podCreationTimestamp="2026-01-23 09:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:08.14505521 +0000 UTC m=+5982.625418596" watchObservedRunningTime="2026-01-23 09:35:08.153924801 +0000 UTC m=+5982.634288187" Jan 23 09:35:08 crc kubenswrapper[4982]: I0123 09:35:08.178390 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-slksn" podStartSLOduration=2.641218473 podStartE2EDuration="7.178368705s" podCreationTimestamp="2026-01-23 09:35:01 +0000 UTC" firstStartedPulling="2026-01-23 09:35:02.971568465 +0000 UTC m=+5977.451931851" lastFinishedPulling="2026-01-23 09:35:07.508718697 +0000 UTC m=+5981.989082083" observedRunningTime="2026-01-23 09:35:08.171308053 +0000 UTC m=+5982.651671459" watchObservedRunningTime="2026-01-23 09:35:08.178368705 +0000 UTC m=+5982.658732091" Jan 23 09:35:09 crc kubenswrapper[4982]: I0123 09:35:09.145657 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerID="9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806" exitCode=0 Jan 23 09:35:09 crc kubenswrapper[4982]: I0123 09:35:09.145782 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x22p8" event={"ID":"6a6bef32-d56e-4fb3-977d-452837e185a8","Type":"ContainerDied","Data":"9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806"} Jan 23 09:35:10 crc kubenswrapper[4982]: I0123 09:35:10.291872 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:10 crc kubenswrapper[4982]: I0123 09:35:10.309562 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.189099 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x22p8" event={"ID":"6a6bef32-d56e-4fb3-977d-452837e185a8","Type":"ContainerStarted","Data":"220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4"} Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.213664 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.217444 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x22p8" podStartSLOduration=4.389463999 podStartE2EDuration="7.217426618s" podCreationTimestamp="2026-01-23 09:35:04 +0000 UTC" firstStartedPulling="2026-01-23 09:35:07.118716045 +0000 UTC m=+5981.599079431" lastFinishedPulling="2026-01-23 09:35:09.946678664 +0000 UTC m=+5984.427042050" observedRunningTime="2026-01-23 09:35:11.209555494 +0000 UTC m=+5985.689918880" watchObservedRunningTime="2026-01-23 09:35:11.217426618 +0000 UTC m=+5985.697790004" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.342001 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.493071 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.494223 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.764355 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l5mm5"] Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.766055 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.769749 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.771682 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.779455 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5mm5"] Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.844074 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-config-data\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.844485 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbn4\" (UniqueName: \"kubernetes.io/projected/a9678e71-330e-4862-a9a6-c4280299fb1b-kube-api-access-8pbn4\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.844736 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.844796 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-scripts\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.947213 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbn4\" (UniqueName: \"kubernetes.io/projected/a9678e71-330e-4862-a9a6-c4280299fb1b-kube-api-access-8pbn4\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.947313 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.947340 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-scripts\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.947379 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-config-data\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.953458 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-scripts\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.956171 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.956259 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-config-data\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:11 crc kubenswrapper[4982]: I0123 09:35:11.964289 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbn4\" (UniqueName: \"kubernetes.io/projected/a9678e71-330e-4862-a9a6-c4280299fb1b-kube-api-access-8pbn4\") pod \"nova-cell1-cell-mapping-l5mm5\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:12 crc kubenswrapper[4982]: I0123 09:35:12.044707 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:12 crc kubenswrapper[4982]: I0123 09:35:12.044771 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:12 crc kubenswrapper[4982]: I0123 09:35:12.089193 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:12 crc kubenswrapper[4982]: I0123 09:35:12.113836 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:12 crc kubenswrapper[4982]: I0123 09:35:12.262732 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:12 crc kubenswrapper[4982]: I0123 09:35:12.585751 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5mm5"] Jan 23 09:35:12 crc kubenswrapper[4982]: W0123 09:35:12.591561 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9678e71_330e_4862_a9a6_c4280299fb1b.slice/crio-59adeae3289809cb02975111d1c582773ddc68a498dc2abdf94d5602855359ec WatchSource:0}: Error finding container 59adeae3289809cb02975111d1c582773ddc68a498dc2abdf94d5602855359ec: Status 404 returned error can't find the container with id 59adeae3289809cb02975111d1c582773ddc68a498dc2abdf94d5602855359ec Jan 23 09:35:13 crc kubenswrapper[4982]: I0123 09:35:13.211173 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5mm5" event={"ID":"a9678e71-330e-4862-a9a6-c4280299fb1b","Type":"ContainerStarted","Data":"c7815a8ad19d285d91ab37c1bfc6c55f94a202430afe9edf5a62f02503d68606"} Jan 23 09:35:13 crc kubenswrapper[4982]: I0123 09:35:13.211493 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5mm5" event={"ID":"a9678e71-330e-4862-a9a6-c4280299fb1b","Type":"ContainerStarted","Data":"59adeae3289809cb02975111d1c582773ddc68a498dc2abdf94d5602855359ec"} Jan 23 09:35:13 crc kubenswrapper[4982]: I0123 09:35:13.235452 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l5mm5" podStartSLOduration=2.235431219 podStartE2EDuration="2.235431219s" podCreationTimestamp="2026-01-23 09:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:13.228920742 +0000 UTC m=+5987.709284128" watchObservedRunningTime="2026-01-23 09:35:13.235431219 +0000 UTC m=+5987.715794605" Jan 23 09:35:14 crc kubenswrapper[4982]: I0123 09:35:14.280642 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-slksn"] Jan 23 09:35:14 crc kubenswrapper[4982]: I0123 09:35:14.281218 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-slksn" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="registry-server" containerID="cri-o://96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f" gracePeriod=2 Jan 23 09:35:14 crc kubenswrapper[4982]: I0123 09:35:14.695578 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:14 crc kubenswrapper[4982]: I0123 09:35:14.695676 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:14 crc kubenswrapper[4982]: I0123 09:35:14.746121 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:15 crc kubenswrapper[4982]: I0123 09:35:15.292377 4982 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:15 crc kubenswrapper[4982]: I0123 09:35:15.924147 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.040565 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-utilities\") pod \"258efe66-cd66-4684-830a-1b801004d246\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.040683 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-catalog-content\") pod \"258efe66-cd66-4684-830a-1b801004d246\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.040886 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrh64\" (UniqueName: \"kubernetes.io/projected/258efe66-cd66-4684-830a-1b801004d246-kube-api-access-wrh64\") pod \"258efe66-cd66-4684-830a-1b801004d246\" (UID: \"258efe66-cd66-4684-830a-1b801004d246\") " Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.041429 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-utilities" (OuterVolumeSpecName: "utilities") pod "258efe66-cd66-4684-830a-1b801004d246" (UID: "258efe66-cd66-4684-830a-1b801004d246"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.041821 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.060893 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258efe66-cd66-4684-830a-1b801004d246-kube-api-access-wrh64" (OuterVolumeSpecName: "kube-api-access-wrh64") pod "258efe66-cd66-4684-830a-1b801004d246" (UID: "258efe66-cd66-4684-830a-1b801004d246"). InnerVolumeSpecName "kube-api-access-wrh64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.066081 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "258efe66-cd66-4684-830a-1b801004d246" (UID: "258efe66-cd66-4684-830a-1b801004d246"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.144851 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/258efe66-cd66-4684-830a-1b801004d246-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.144900 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrh64\" (UniqueName: \"kubernetes.io/projected/258efe66-cd66-4684-830a-1b801004d246-kube-api-access-wrh64\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.243265 4982 generic.go:334] "Generic (PLEG): container finished" podID="258efe66-cd66-4684-830a-1b801004d246" containerID="96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f" exitCode=0 Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.244421 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-slksn" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.247748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerDied","Data":"96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f"} Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.247800 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-slksn" event={"ID":"258efe66-cd66-4684-830a-1b801004d246","Type":"ContainerDied","Data":"c2f2967689d365bff6479c573b0113de4378e3ba2226d4acbf152de43fbcf9f4"} Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.247823 4982 scope.go:117] "RemoveContainer" containerID="96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.286313 4982 scope.go:117] "RemoveContainer" containerID="0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.288434 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-slksn"] Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.299084 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-slksn"] Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.312700 4982 scope.go:117] "RemoveContainer" containerID="2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.356970 4982 scope.go:117] "RemoveContainer" containerID="96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f" Jan 23 09:35:16 crc kubenswrapper[4982]: E0123 09:35:16.357311 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f\": container with ID starting with 96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f not found: ID does not exist" containerID="96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.357350 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f"} err="failed to get container status 
\"96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f\": rpc error: code = NotFound desc = could not find container \"96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f\": container with ID starting with 96cedbfeca5a59ef26aa8cd60d4637f0bf26dcfa1c1bd205c9fd168428f5083f not found: ID does not exist" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.357377 4982 scope.go:117] "RemoveContainer" containerID="0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60" Jan 23 09:35:16 crc kubenswrapper[4982]: E0123 09:35:16.357728 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60\": container with ID starting with 0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60 not found: ID does not exist" containerID="0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.357755 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60"} err="failed to get container status \"0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60\": rpc error: code = NotFound desc = could not find container \"0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60\": container with ID starting with 0412812c9dc176e2e34fa4bd0f4eb02340c888432ba3cda2cb9a18a47ac2ce60 not found: ID does not exist" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.357775 4982 scope.go:117] "RemoveContainer" containerID="2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856" Jan 23 09:35:16 crc kubenswrapper[4982]: E0123 09:35:16.358074 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856\": container with ID starting with 2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856 not found: ID does not exist" containerID="2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.358099 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856"} err="failed to get container status \"2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856\": rpc error: code = NotFound desc = could not find container \"2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856\": container with ID starting with 2e16729580c66517d9d5966e48a395c4012ecfce7201571df44ebfd4d6c8c856 not found: ID does not exist" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.492732 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:35:16 crc kubenswrapper[4982]: I0123 09:35:16.493431 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.080080 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x22p8"] Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.254680 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x22p8" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" 
containerName="registry-server" containerID="cri-o://220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4" gracePeriod=2 Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.511882 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.108:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.512197 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.108:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.809184 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.904022 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-catalog-content\") pod \"6a6bef32-d56e-4fb3-977d-452837e185a8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.906550 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2xlf\" (UniqueName: \"kubernetes.io/projected/6a6bef32-d56e-4fb3-977d-452837e185a8-kube-api-access-f2xlf\") pod \"6a6bef32-d56e-4fb3-977d-452837e185a8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.907448 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-utilities\") pod \"6a6bef32-d56e-4fb3-977d-452837e185a8\" (UID: \"6a6bef32-d56e-4fb3-977d-452837e185a8\") " Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.908137 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-utilities" (OuterVolumeSpecName: "utilities") pod "6a6bef32-d56e-4fb3-977d-452837e185a8" (UID: "6a6bef32-d56e-4fb3-977d-452837e185a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.909483 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.912144 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258efe66-cd66-4684-830a-1b801004d246" path="/var/lib/kubelet/pods/258efe66-cd66-4684-830a-1b801004d246/volumes" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.921889 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6bef32-d56e-4fb3-977d-452837e185a8-kube-api-access-f2xlf" (OuterVolumeSpecName: "kube-api-access-f2xlf") pod "6a6bef32-d56e-4fb3-977d-452837e185a8" (UID: "6a6bef32-d56e-4fb3-977d-452837e185a8"). InnerVolumeSpecName "kube-api-access-f2xlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:17 crc kubenswrapper[4982]: I0123 09:35:17.955795 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a6bef32-d56e-4fb3-977d-452837e185a8" (UID: "6a6bef32-d56e-4fb3-977d-452837e185a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.005083 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.011293 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2xlf\" (UniqueName: \"kubernetes.io/projected/6a6bef32-d56e-4fb3-977d-452837e185a8-kube-api-access-f2xlf\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.011330 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a6bef32-d56e-4fb3-977d-452837e185a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.113033 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce14684-fad6-4869-bf34-935ea9637b04-logs\") pod \"9ce14684-fad6-4869-bf34-935ea9637b04\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.113110 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-config-data\") pod \"9ce14684-fad6-4869-bf34-935ea9637b04\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.113183 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-combined-ca-bundle\") pod \"9ce14684-fad6-4869-bf34-935ea9637b04\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.113283 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86l26\" (UniqueName: \"kubernetes.io/projected/9ce14684-fad6-4869-bf34-935ea9637b04-kube-api-access-86l26\") pod \"9ce14684-fad6-4869-bf34-935ea9637b04\" (UID: \"9ce14684-fad6-4869-bf34-935ea9637b04\") " Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.113599 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce14684-fad6-4869-bf34-935ea9637b04-logs" (OuterVolumeSpecName: "logs") pod "9ce14684-fad6-4869-bf34-935ea9637b04" (UID: "9ce14684-fad6-4869-bf34-935ea9637b04"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.114315 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce14684-fad6-4869-bf34-935ea9637b04-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.117352 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce14684-fad6-4869-bf34-935ea9637b04-kube-api-access-86l26" (OuterVolumeSpecName: "kube-api-access-86l26") pod "9ce14684-fad6-4869-bf34-935ea9637b04" (UID: "9ce14684-fad6-4869-bf34-935ea9637b04"). InnerVolumeSpecName "kube-api-access-86l26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.150894 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-config-data" (OuterVolumeSpecName: "config-data") pod "9ce14684-fad6-4869-bf34-935ea9637b04" (UID: "9ce14684-fad6-4869-bf34-935ea9637b04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.154033 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce14684-fad6-4869-bf34-935ea9637b04" (UID: "9ce14684-fad6-4869-bf34-935ea9637b04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.216932 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.216977 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce14684-fad6-4869-bf34-935ea9637b04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.216996 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86l26\" (UniqueName: \"kubernetes.io/projected/9ce14684-fad6-4869-bf34-935ea9637b04-kube-api-access-86l26\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.265546 4982 generic.go:334] "Generic (PLEG): container finished" podID="9ce14684-fad6-4869-bf34-935ea9637b04" containerID="7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068" exitCode=0 Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.265624 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce14684-fad6-4869-bf34-935ea9637b04","Type":"ContainerDied","Data":"7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068"} Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.265638 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.265674 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ce14684-fad6-4869-bf34-935ea9637b04","Type":"ContainerDied","Data":"7527f5aeaec37f5760f89db33dc423c5882131e865754cd4fce3f3b189008863"} Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.265693 4982 scope.go:117] "RemoveContainer" containerID="7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.268084 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerID="220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4" exitCode=0 Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.268129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x22p8" event={"ID":"6a6bef32-d56e-4fb3-977d-452837e185a8","Type":"ContainerDied","Data":"220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4"} Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.268163 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x22p8" event={"ID":"6a6bef32-d56e-4fb3-977d-452837e185a8","Type":"ContainerDied","Data":"abb499387c5de84827c6b414fa2beca99d48250e477b924d4902eb20583d84a9"} Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.268242 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x22p8" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.292706 4982 scope.go:117] "RemoveContainer" containerID="cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.299926 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.321648 4982 scope.go:117] "RemoveContainer" containerID="7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.322274 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068\": container with ID starting with 7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068 not found: ID does not exist" containerID="7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.322320 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068"} err="failed to get container status \"7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068\": rpc error: code = NotFound desc = could not find container \"7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068\": container with ID starting with 7b5c61cf80b1853da83340b71b376dca6c0edb344c703e91b9faa2376a145068 not found: ID does not exist" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.322348 4982 scope.go:117] "RemoveContainer" containerID="cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.322617 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2\": container with ID starting with cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2 not found: ID does not exist" containerID="cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.322656 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2"} err="failed to get container status \"cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2\": rpc error: code = NotFound desc = could not find container \"cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2\": container with ID starting with cb7ea4745046230cb21213188a45a117b624d73d784358caa76436d2135eb8c2 not found: ID does not exist" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.322675 4982 scope.go:117] "RemoveContainer" containerID="220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.331809 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.354875 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x22p8"] Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.354965 4982 scope.go:117] "RemoveContainer" containerID="9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.394690 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x22p8"] Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.416726 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417286 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-log" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417324 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-log" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417352 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-api" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417362 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-api" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417373 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="extract-utilities" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417382 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="extract-utilities" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417402 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="registry-server" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417409 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="registry-server" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417430 4982 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="extract-content" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417437 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="extract-content" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417450 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="registry-server" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417457 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="registry-server" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417475 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="extract-utilities" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417482 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="extract-utilities" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.417494 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="extract-content" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417501 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="extract-content" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417729 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" containerName="registry-server" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417769 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-log" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417790 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="258efe66-cd66-4684-830a-1b801004d246" containerName="registry-server" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.417802 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" containerName="nova-api-api" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.418931 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.430331 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.445174 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.523961 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-config-data\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.524125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.524177 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-logs\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.524209 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9lx\" (UniqueName: \"kubernetes.io/projected/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-kube-api-access-6h9lx\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.543212 4982 scope.go:117] "RemoveContainer" containerID="8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.615714 4982 scope.go:117] "RemoveContainer" containerID="220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.616325 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4\": container with ID starting with 220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4 not found: ID does not exist" containerID="220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.616359 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4"} err="failed to get container status \"220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4\": rpc error: code = NotFound desc = could not find container \"220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4\": container with ID starting with 220694fda3082976027807859064b013880a02c180fd38a800df0eef0593a9b4 not found: ID does not exist" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.616384 4982 scope.go:117] "RemoveContainer" containerID="9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.616681 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806\": container with ID starting with 9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806 not found: ID does not exist" containerID="9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.616708 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806"} err="failed to get container status \"9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806\": rpc error: code = NotFound desc = could not find container \"9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806\": container with ID starting with 9869e81e8816ca87055870fe3821fbfd1851ddec2aa0e07a93550e815274d806 not found: ID does not exist" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.616727 4982 scope.go:117] "RemoveContainer" containerID="8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.617041 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974\": container with ID starting with 8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974 not found: ID does not exist" containerID="8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.617075 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974"} err="failed to get container status \"8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974\": rpc error: code = NotFound desc = could not find container \"8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974\": container with ID starting with 8ae1ec47445cdc693f62b37f5b7b871cf8f10a70b762acb9095a6737d5c31974 not found: ID does not exist" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.625419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-config-data\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.625561 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.625663 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-logs\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.625692 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9lx\" (UniqueName: \"kubernetes.io/projected/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-kube-api-access-6h9lx\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " 
pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.626087 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-logs\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.630408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-config-data\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.632282 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.648234 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9lx\" (UniqueName: \"kubernetes.io/projected/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-kube-api-access-6h9lx\") pod \"nova-api-0\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " pod="openstack/nova-api-0" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.828498 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:35:18 crc kubenswrapper[4982]: E0123 09:35:18.829045 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:35:18 crc kubenswrapper[4982]: I0123 09:35:18.854869 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:19 crc kubenswrapper[4982]: I0123 09:35:19.280357 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9678e71-330e-4862-a9a6-c4280299fb1b" containerID="c7815a8ad19d285d91ab37c1bfc6c55f94a202430afe9edf5a62f02503d68606" exitCode=0 Jan 23 09:35:19 crc kubenswrapper[4982]: I0123 09:35:19.280431 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5mm5" event={"ID":"a9678e71-330e-4862-a9a6-c4280299fb1b","Type":"ContainerDied","Data":"c7815a8ad19d285d91ab37c1bfc6c55f94a202430afe9edf5a62f02503d68606"} Jan 23 09:35:19 crc kubenswrapper[4982]: W0123 09:35:19.321079 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93908cf0_c10e_454f_a2cb_c10cfa00d2a4.slice/crio-952d259bfc51df1a3da78e344dd12722f356e97a87d3e0e74550d288088860e3 WatchSource:0}: Error finding container 952d259bfc51df1a3da78e344dd12722f356e97a87d3e0e74550d288088860e3: Status 404 returned error can't find the container with id 952d259bfc51df1a3da78e344dd12722f356e97a87d3e0e74550d288088860e3 Jan 23 09:35:19 crc kubenswrapper[4982]: I0123 09:35:19.324784 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:19 crc kubenswrapper[4982]: I0123 09:35:19.838515 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6bef32-d56e-4fb3-977d-452837e185a8" path="/var/lib/kubelet/pods/6a6bef32-d56e-4fb3-977d-452837e185a8/volumes" Jan 23 09:35:19 crc kubenswrapper[4982]: I0123 09:35:19.839648 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce14684-fad6-4869-bf34-935ea9637b04" path="/var/lib/kubelet/pods/9ce14684-fad6-4869-bf34-935ea9637b04/volumes" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.297467 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93908cf0-c10e-454f-a2cb-c10cfa00d2a4","Type":"ContainerStarted","Data":"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"} Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.297514 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93908cf0-c10e-454f-a2cb-c10cfa00d2a4","Type":"ContainerStarted","Data":"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"} Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.297529 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93908cf0-c10e-454f-a2cb-c10cfa00d2a4","Type":"ContainerStarted","Data":"952d259bfc51df1a3da78e344dd12722f356e97a87d3e0e74550d288088860e3"} Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.324961 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.324940906 podStartE2EDuration="2.324940906s" podCreationTimestamp="2026-01-23 09:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:20.314839401 +0000 UTC m=+5994.795202777" watchObservedRunningTime="2026-01-23 09:35:20.324940906 +0000 UTC m=+5994.805304302" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.671783 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.772357 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-scripts\") pod \"a9678e71-330e-4862-a9a6-c4280299fb1b\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.772413 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pbn4\" (UniqueName: \"kubernetes.io/projected/a9678e71-330e-4862-a9a6-c4280299fb1b-kube-api-access-8pbn4\") pod \"a9678e71-330e-4862-a9a6-c4280299fb1b\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.772542 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-config-data\") pod \"a9678e71-330e-4862-a9a6-c4280299fb1b\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.772593 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-combined-ca-bundle\") pod \"a9678e71-330e-4862-a9a6-c4280299fb1b\" (UID: \"a9678e71-330e-4862-a9a6-c4280299fb1b\") " Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.780496 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-scripts" (OuterVolumeSpecName: "scripts") pod "a9678e71-330e-4862-a9a6-c4280299fb1b" (UID: "a9678e71-330e-4862-a9a6-c4280299fb1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.784997 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9678e71-330e-4862-a9a6-c4280299fb1b-kube-api-access-8pbn4" (OuterVolumeSpecName: "kube-api-access-8pbn4") pod "a9678e71-330e-4862-a9a6-c4280299fb1b" (UID: "a9678e71-330e-4862-a9a6-c4280299fb1b"). InnerVolumeSpecName "kube-api-access-8pbn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.812661 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-config-data" (OuterVolumeSpecName: "config-data") pod "a9678e71-330e-4862-a9a6-c4280299fb1b" (UID: "a9678e71-330e-4862-a9a6-c4280299fb1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.817582 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9678e71-330e-4862-a9a6-c4280299fb1b" (UID: "a9678e71-330e-4862-a9a6-c4280299fb1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.875411 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.875452 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pbn4\" (UniqueName: \"kubernetes.io/projected/a9678e71-330e-4862-a9a6-c4280299fb1b-kube-api-access-8pbn4\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.875465 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:20 crc kubenswrapper[4982]: I0123 09:35:20.875476 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9678e71-330e-4862-a9a6-c4280299fb1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.312086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5mm5" event={"ID":"a9678e71-330e-4862-a9a6-c4280299fb1b","Type":"ContainerDied","Data":"59adeae3289809cb02975111d1c582773ddc68a498dc2abdf94d5602855359ec"} Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.312149 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59adeae3289809cb02975111d1c582773ddc68a498dc2abdf94d5602855359ec" Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.312171 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5mm5" Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.502368 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.528028 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.528275 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-log" containerID="cri-o://2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe" gracePeriod=30 Jan 23 09:35:21 crc kubenswrapper[4982]: I0123 09:35:21.528468 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-metadata" containerID="cri-o://a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac" gracePeriod=30 Jan 23 09:35:22 crc kubenswrapper[4982]: I0123 09:35:22.323152 4982 generic.go:334] "Generic (PLEG): container finished" podID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerID="2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe" exitCode=143 Jan 23 09:35:22 crc kubenswrapper[4982]: I0123 09:35:22.323205 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98163aaa-b1a2-4388-bfcb-ec922843a523","Type":"ContainerDied","Data":"2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe"} Jan 23 09:35:22 crc kubenswrapper[4982]: I0123 09:35:22.323733 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-api" containerID="cri-o://7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73" gracePeriod=30 Jan 23 09:35:22 crc kubenswrapper[4982]: I0123 09:35:22.323844 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-log" containerID="cri-o://604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea" gracePeriod=30 Jan 23 09:35:22 crc kubenswrapper[4982]: I0123 09:35:22.976218 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.120920 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-logs\") pod \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.121011 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-config-data\") pod \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.121214 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-combined-ca-bundle\") pod \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.121315 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h9lx\" (UniqueName: \"kubernetes.io/projected/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-kube-api-access-6h9lx\") pod \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\" (UID: \"93908cf0-c10e-454f-a2cb-c10cfa00d2a4\") " Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.122204 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-logs" (OuterVolumeSpecName: "logs") pod "93908cf0-c10e-454f-a2cb-c10cfa00d2a4" (UID: "93908cf0-c10e-454f-a2cb-c10cfa00d2a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.129944 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-kube-api-access-6h9lx" (OuterVolumeSpecName: "kube-api-access-6h9lx") pod "93908cf0-c10e-454f-a2cb-c10cfa00d2a4" (UID: "93908cf0-c10e-454f-a2cb-c10cfa00d2a4"). InnerVolumeSpecName "kube-api-access-6h9lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.164657 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93908cf0-c10e-454f-a2cb-c10cfa00d2a4" (UID: "93908cf0-c10e-454f-a2cb-c10cfa00d2a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.171341 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-config-data" (OuterVolumeSpecName: "config-data") pod "93908cf0-c10e-454f-a2cb-c10cfa00d2a4" (UID: "93908cf0-c10e-454f-a2cb-c10cfa00d2a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.225301 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.225363 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h9lx\" (UniqueName: \"kubernetes.io/projected/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-kube-api-access-6h9lx\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.225381 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.225396 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93908cf0-c10e-454f-a2cb-c10cfa00d2a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340617 4982 generic.go:334] "Generic (PLEG): container finished" podID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerID="7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73" exitCode=0 Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340672 4982 generic.go:334] "Generic (PLEG): container finished" podID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerID="604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea" exitCode=143 Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340698 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93908cf0-c10e-454f-a2cb-c10cfa00d2a4","Type":"ContainerDied","Data":"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"} Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340875 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93908cf0-c10e-454f-a2cb-c10cfa00d2a4","Type":"ContainerDied","Data":"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"} Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340920 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93908cf0-c10e-454f-a2cb-c10cfa00d2a4","Type":"ContainerDied","Data":"952d259bfc51df1a3da78e344dd12722f356e97a87d3e0e74550d288088860e3"} Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.340947 4982 scope.go:117] "RemoveContainer" containerID="7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.381853 4982 scope.go:117] "RemoveContainer" containerID="604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.395046 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.410258 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.423774 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:23 crc kubenswrapper[4982]: E0123 09:35:23.424258 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-api" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.424277 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-api" Jan 23 09:35:23 crc kubenswrapper[4982]: E0123 09:35:23.424292 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-log" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.424299 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-log" Jan 23 09:35:23 crc kubenswrapper[4982]: E0123 09:35:23.424312 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9678e71-330e-4862-a9a6-c4280299fb1b" containerName="nova-manage" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.424318 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9678e71-330e-4862-a9a6-c4280299fb1b" containerName="nova-manage" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.424546 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9678e71-330e-4862-a9a6-c4280299fb1b" containerName="nova-manage" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.424568 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-api" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.424588 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" containerName="nova-api-log" Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.425681 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.431545 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.433818 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.451198 4982 scope.go:117] "RemoveContainer" containerID="7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"
Jan 23 09:35:23 crc kubenswrapper[4982]: E0123 09:35:23.452992 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73\": container with ID starting with 7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73 not found: ID does not exist" containerID="7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.453042 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"} err="failed to get container status \"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73\": rpc error: code = NotFound desc = could not find container \"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73\": container with ID starting with 7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73 not found: ID does not exist"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.453101 4982 scope.go:117] "RemoveContainer" containerID="604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"
Jan 23 09:35:23 crc kubenswrapper[4982]: E0123 09:35:23.453579 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea\": container with ID starting with 604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea not found: ID does not exist" containerID="604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.453597 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"} err="failed to get container status \"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea\": rpc error: code = NotFound desc = could not find container \"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea\": container with ID starting with 604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea not found: ID does not exist"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.453610 4982 scope.go:117] "RemoveContainer" containerID="7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.455560 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73"} err="failed to get container status \"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73\": rpc error: code = NotFound desc = could not find container \"7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73\": container with ID starting with 7bf7e3b52619fc736fdbd560cc541afc07aede66b9b4bc9229c17116c524bf73 not found: ID does not exist"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.455596 4982 scope.go:117] "RemoveContainer" containerID="604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.457258 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea"} err="failed to get container status \"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea\": rpc error: code = NotFound desc = could not find container \"604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea\": container with ID starting with 604060f3f9c90a0351731d229cdf5597678c3c1827e34b8ecdabaf6686c150ea not found: ID does not exist"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.532286 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-config-data\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.532733 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswkn\" (UniqueName: \"kubernetes.io/projected/e43d545a-c8eb-4b88-83e3-21c842e35ab9-kube-api-access-nswkn\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.533063 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43d545a-c8eb-4b88-83e3-21c842e35ab9-logs\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.533325 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.635672 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nswkn\" (UniqueName: \"kubernetes.io/projected/e43d545a-c8eb-4b88-83e3-21c842e35ab9-kube-api-access-nswkn\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.636006 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43d545a-c8eb-4b88-83e3-21c842e35ab9-logs\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.636055 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.636115 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-config-data\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.636695 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43d545a-c8eb-4b88-83e3-21c842e35ab9-logs\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.640006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-config-data\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.654238 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nswkn\" (UniqueName: \"kubernetes.io/projected/e43d545a-c8eb-4b88-83e3-21c842e35ab9-kube-api-access-nswkn\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.658045 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.762916 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 23 09:35:23 crc kubenswrapper[4982]: I0123 09:35:23.841714 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93908cf0-c10e-454f-a2cb-c10cfa00d2a4" path="/var/lib/kubelet/pods/93908cf0-c10e-454f-a2cb-c10cfa00d2a4/volumes"
Jan 23 09:35:24 crc kubenswrapper[4982]: I0123 09:35:24.233673 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 23 09:35:24 crc kubenswrapper[4982]: W0123 09:35:24.234836 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode43d545a_c8eb_4b88_83e3_21c842e35ab9.slice/crio-f5915b549c821668d58c0d616da82acc8aafa2a2cfb185e8e06c7750a378c431 WatchSource:0}: Error finding container f5915b549c821668d58c0d616da82acc8aafa2a2cfb185e8e06c7750a378c431: Status 404 returned error can't find the container with id f5915b549c821668d58c0d616da82acc8aafa2a2cfb185e8e06c7750a378c431
Jan 23 09:35:24 crc kubenswrapper[4982]: I0123 09:35:24.358652 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e43d545a-c8eb-4b88-83e3-21c842e35ab9","Type":"ContainerStarted","Data":"f5915b549c821668d58c0d616da82acc8aafa2a2cfb185e8e06c7750a378c431"}
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.271786 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/98163aaa-b1a2-4388-bfcb-ec922843a523-kube-api-access-qhx84\") pod \"98163aaa-b1a2-4388-bfcb-ec922843a523\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.271970 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98163aaa-b1a2-4388-bfcb-ec922843a523-logs\") pod \"98163aaa-b1a2-4388-bfcb-ec922843a523\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.272116 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-combined-ca-bundle\") pod \"98163aaa-b1a2-4388-bfcb-ec922843a523\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.272161 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-nova-metadata-tls-certs\") pod \"98163aaa-b1a2-4388-bfcb-ec922843a523\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.272182 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-config-data\") pod \"98163aaa-b1a2-4388-bfcb-ec922843a523\" (UID: \"98163aaa-b1a2-4388-bfcb-ec922843a523\") " Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.272899 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98163aaa-b1a2-4388-bfcb-ec922843a523-logs" (OuterVolumeSpecName: "logs") pod "98163aaa-b1a2-4388-bfcb-ec922843a523" (UID: "98163aaa-b1a2-4388-bfcb-ec922843a523"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.277602 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98163aaa-b1a2-4388-bfcb-ec922843a523-kube-api-access-qhx84" (OuterVolumeSpecName: "kube-api-access-qhx84") pod "98163aaa-b1a2-4388-bfcb-ec922843a523" (UID: "98163aaa-b1a2-4388-bfcb-ec922843a523"). InnerVolumeSpecName "kube-api-access-qhx84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.305308 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98163aaa-b1a2-4388-bfcb-ec922843a523" (UID: "98163aaa-b1a2-4388-bfcb-ec922843a523"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.309294 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-config-data" (OuterVolumeSpecName: "config-data") pod "98163aaa-b1a2-4388-bfcb-ec922843a523" (UID: "98163aaa-b1a2-4388-bfcb-ec922843a523"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.340009 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "98163aaa-b1a2-4388-bfcb-ec922843a523" (UID: "98163aaa-b1a2-4388-bfcb-ec922843a523"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.371166 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e43d545a-c8eb-4b88-83e3-21c842e35ab9","Type":"ContainerStarted","Data":"6f57af5f1488eb91f00153a0f54b9e30b1211980408d8ef9182f7f4d7a52492f"} Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.371236 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e43d545a-c8eb-4b88-83e3-21c842e35ab9","Type":"ContainerStarted","Data":"992b59e8f608eb771a6c8e17fdb663991fd5bf5acda56113a2476df76b5d8266"} Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.375240 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98163aaa-b1a2-4388-bfcb-ec922843a523-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.375270 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.375281 4982 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.375291 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98163aaa-b1a2-4388-bfcb-ec922843a523-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.375300 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhx84\" (UniqueName: \"kubernetes.io/projected/98163aaa-b1a2-4388-bfcb-ec922843a523-kube-api-access-qhx84\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.376984 4982 generic.go:334] "Generic (PLEG): container finished" podID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerID="a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac" exitCode=0 Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.377030 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98163aaa-b1a2-4388-bfcb-ec922843a523","Type":"ContainerDied","Data":"a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac"} Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.377062 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"98163aaa-b1a2-4388-bfcb-ec922843a523","Type":"ContainerDied","Data":"be54e344c3f53a7cd042f796e19c4ef665782b1ce1ecf67fc65bc61e4876cc4e"} Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.377081 4982 scope.go:117] "RemoveContainer" containerID="a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.377142 
4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.394559 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.394537179 podStartE2EDuration="2.394537179s" podCreationTimestamp="2026-01-23 09:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:25.390328835 +0000 UTC m=+5999.870692211" watchObservedRunningTime="2026-01-23 09:35:25.394537179 +0000 UTC m=+5999.874900585" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.414226 4982 scope.go:117] "RemoveContainer" containerID="2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.428767 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.441183 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.454200 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:25 crc kubenswrapper[4982]: E0123 09:35:25.454929 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-log" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.454948 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-log" Jan 23 09:35:25 crc kubenswrapper[4982]: E0123 09:35:25.454963 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-metadata" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.454970 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-metadata" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.455183 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-log" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.455196 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" containerName="nova-metadata-metadata" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.456360 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.459005 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.464617 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.481226 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.483012 4982 scope.go:117] "RemoveContainer" containerID="a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac" Jan 23 09:35:25 crc kubenswrapper[4982]: E0123 09:35:25.484556 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac\": container with ID starting with a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac not found: ID does not exist" containerID="a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.484595 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac"} err="failed to get container status \"a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac\": rpc error: code = NotFound desc = could not find container \"a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac\": container with ID starting with a1ffdafaea710e29db412ec9b323eb1661985874c06d82638bbc1dc6e1021eac not found: ID does not exist" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.484663 4982 scope.go:117] "RemoveContainer" containerID="2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe" Jan 23 09:35:25 crc kubenswrapper[4982]: E0123 09:35:25.485185 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe\": container with ID starting with 2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe not found: ID does not exist" containerID="2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.485220 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe"} err="failed to get container status \"2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe\": rpc error: code = NotFound desc = could not find container \"2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe\": container with ID starting with 2f568850cc2513cc746180f5154baa791c93e96fd0378128bd39e813afeffcbe not found: ID does not exist" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.581178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.581717 4982 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8bg\" (UniqueName: \"kubernetes.io/projected/50482909-3a21-4256-845b-3967c00bcf1b-kube-api-access-zm8bg\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.581870 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-config-data\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.581966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.582067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50482909-3a21-4256-845b-3967c00bcf1b-logs\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.683111 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.683164 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50482909-3a21-4256-845b-3967c00bcf1b-logs\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.683253 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.683273 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8bg\" (UniqueName: \"kubernetes.io/projected/50482909-3a21-4256-845b-3967c00bcf1b-kube-api-access-zm8bg\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.683326 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-config-data\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.683568 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50482909-3a21-4256-845b-3967c00bcf1b-logs\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") 
" pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.686805 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.686904 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.687147 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50482909-3a21-4256-845b-3967c00bcf1b-config-data\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.698569 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8bg\" (UniqueName: \"kubernetes.io/projected/50482909-3a21-4256-845b-3967c00bcf1b-kube-api-access-zm8bg\") pod \"nova-metadata-0\" (UID: \"50482909-3a21-4256-845b-3967c00bcf1b\") " pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.782148 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 09:35:25 crc kubenswrapper[4982]: I0123 09:35:25.840016 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98163aaa-b1a2-4388-bfcb-ec922843a523" path="/var/lib/kubelet/pods/98163aaa-b1a2-4388-bfcb-ec922843a523/volumes" Jan 23 09:35:26 crc kubenswrapper[4982]: I0123 09:35:26.088914 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 09:35:26 crc kubenswrapper[4982]: I0123 09:35:26.394484 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50482909-3a21-4256-845b-3967c00bcf1b","Type":"ContainerStarted","Data":"c76cc0a6087707281c7a83e16ac6464618276bd6c767903e46e9ae19d18d5b60"} Jan 23 09:35:26 crc kubenswrapper[4982]: I0123 09:35:26.394522 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50482909-3a21-4256-845b-3967c00bcf1b","Type":"ContainerStarted","Data":"638061d0315cb60ff017b8e3fb4f5bac8069e0bb0c86f967af9325b42a1e2c77"} Jan 23 09:35:27 crc kubenswrapper[4982]: I0123 09:35:27.415557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50482909-3a21-4256-845b-3967c00bcf1b","Type":"ContainerStarted","Data":"3db0e5c0cce69fbb26cbc50336eef8c06475140ea3885600831f076200768341"} Jan 23 09:35:27 crc kubenswrapper[4982]: I0123 09:35:27.438899 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.438874725 podStartE2EDuration="2.438874725s" podCreationTimestamp="2026-01-23 09:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:27.437998061 +0000 UTC m=+6001.918361447" watchObservedRunningTime="2026-01-23 09:35:27.438874725 +0000 UTC m=+6001.919238141" Jan 23 09:35:30 crc kubenswrapper[4982]: 
Jan 23 09:35:30 crc kubenswrapper[4982]: I0123 09:35:30.782760 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 09:35:30 crc kubenswrapper[4982]: I0123 09:35:30.783253 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 23 09:35:32 crc kubenswrapper[4982]: I0123 09:35:32.828470 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd"
Jan 23 09:35:32 crc kubenswrapper[4982]: E0123 09:35:32.829073 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.475202 4982 generic.go:334] "Generic (PLEG): container finished" podID="02c3dd55-c179-4b2f-acfd-97f8150bba0f" containerID="07256719b9e57c70f2664360d43994f6b6e4aca95099e2b0afabd7d1a167f4f3" exitCode=137
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.475278 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02c3dd55-c179-4b2f-acfd-97f8150bba0f","Type":"ContainerDied","Data":"07256719b9e57c70f2664360d43994f6b6e4aca95099e2b0afabd7d1a167f4f3"}
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.475905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"02c3dd55-c179-4b2f-acfd-97f8150bba0f","Type":"ContainerDied","Data":"a9876cfb9cd3b76a208456671a3b548ac57fc22cb4ad71aa3bcdf29ff09ebd7d"}
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.476055 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9876cfb9cd3b76a208456671a3b548ac57fc22cb4ad71aa3bcdf29ff09ebd7d"
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.509147 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.650450 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-config-data\") pod \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") "
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.650599 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-combined-ca-bundle\") pod \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") "
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.650769 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfg7z\" (UniqueName: \"kubernetes.io/projected/02c3dd55-c179-4b2f-acfd-97f8150bba0f-kube-api-access-rfg7z\") pod \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\" (UID: \"02c3dd55-c179-4b2f-acfd-97f8150bba0f\") "
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.657945 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c3dd55-c179-4b2f-acfd-97f8150bba0f-kube-api-access-rfg7z" (OuterVolumeSpecName: "kube-api-access-rfg7z") pod "02c3dd55-c179-4b2f-acfd-97f8150bba0f" (UID: "02c3dd55-c179-4b2f-acfd-97f8150bba0f"). InnerVolumeSpecName "kube-api-access-rfg7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.683696 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02c3dd55-c179-4b2f-acfd-97f8150bba0f" (UID: "02c3dd55-c179-4b2f-acfd-97f8150bba0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.701086 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-config-data" (OuterVolumeSpecName: "config-data") pod "02c3dd55-c179-4b2f-acfd-97f8150bba0f" (UID: "02c3dd55-c179-4b2f-acfd-97f8150bba0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.755752 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.755940 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c3dd55-c179-4b2f-acfd-97f8150bba0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.756033 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfg7z\" (UniqueName: \"kubernetes.io/projected/02c3dd55-c179-4b2f-acfd-97f8150bba0f-kube-api-access-rfg7z\") on node \"crc\" DevicePath \"\""
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.764005 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 09:35:33 crc kubenswrapper[4982]: I0123 09:35:33.764054 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.483894 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.516078 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.539815 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.552600 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 23 09:35:34 crc kubenswrapper[4982]: E0123 09:35:34.553395 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c3dd55-c179-4b2f-acfd-97f8150bba0f" containerName="nova-scheduler-scheduler"
Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.553413 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c3dd55-c179-4b2f-acfd-97f8150bba0f" containerName="nova-scheduler-scheduler"
Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.553728 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c3dd55-c179-4b2f-acfd-97f8150bba0f" containerName="nova-scheduler-scheduler"
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.559891 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.564612 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.680967 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac284cc-3278-42cd-9a34-466f7dd97387-config-data\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.681026 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac284cc-3278-42cd-9a34-466f7dd97387-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.681049 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8q4\" (UniqueName: \"kubernetes.io/projected/9ac284cc-3278-42cd-9a34-466f7dd97387-kube-api-access-9c8q4\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.783728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac284cc-3278-42cd-9a34-466f7dd97387-config-data\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.783780 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac284cc-3278-42cd-9a34-466f7dd97387-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.783807 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8q4\" (UniqueName: \"kubernetes.io/projected/9ac284cc-3278-42cd-9a34-466f7dd97387-kube-api-access-9c8q4\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.790946 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac284cc-3278-42cd-9a34-466f7dd97387-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.806270 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac284cc-3278-42cd-9a34-466f7dd97387-config-data\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.809275 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8q4\" (UniqueName: 
\"kubernetes.io/projected/9ac284cc-3278-42cd-9a34-466f7dd97387-kube-api-access-9c8q4\") pod \"nova-scheduler-0\" (UID: \"9ac284cc-3278-42cd-9a34-466f7dd97387\") " pod="openstack/nova-scheduler-0" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.847948 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.111:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.848030 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.111:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:34 crc kubenswrapper[4982]: I0123 09:35:34.886388 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 09:35:35 crc kubenswrapper[4982]: I0123 09:35:35.386577 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 09:35:35 crc kubenswrapper[4982]: I0123 09:35:35.493927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ac284cc-3278-42cd-9a34-466f7dd97387","Type":"ContainerStarted","Data":"75a7fe2b21dc8c54cf1593aa5f384e1c23b90306dd601952f42cd636e7f71cef"} Jan 23 09:35:35 crc kubenswrapper[4982]: I0123 09:35:35.782414 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:35:35 crc kubenswrapper[4982]: I0123 09:35:35.782498 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 09:35:35 crc kubenswrapper[4982]: I0123 09:35:35.842840 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c3dd55-c179-4b2f-acfd-97f8150bba0f" path="/var/lib/kubelet/pods/02c3dd55-c179-4b2f-acfd-97f8150bba0f/volumes" Jan 23 09:35:36 crc kubenswrapper[4982]: I0123 09:35:36.504388 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ac284cc-3278-42cd-9a34-466f7dd97387","Type":"ContainerStarted","Data":"6366620640c717dff1962cc46f7993923f710ebba9cce130ed9df449cb0b1a34"} Jan 23 09:35:36 crc kubenswrapper[4982]: I0123 09:35:36.528882 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.5288627740000003 podStartE2EDuration="2.528862774s" podCreationTimestamp="2026-01-23 09:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:36.521903695 +0000 UTC m=+6011.002267081" watchObservedRunningTime="2026-01-23 09:35:36.528862774 +0000 UTC m=+6011.009226150" Jan 23 09:35:36 crc kubenswrapper[4982]: I0123 09:35:36.796763 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="50482909-3a21-4256-845b-3967c00bcf1b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.112:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:36 crc kubenswrapper[4982]: I0123 09:35:36.796763 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="50482909-3a21-4256-845b-3967c00bcf1b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.112:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:35:39 crc kubenswrapper[4982]: I0123 09:35:39.887488 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 09:35:43 crc kubenswrapper[4982]: I0123 09:35:43.766958 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 09:35:43 crc kubenswrapper[4982]: I0123 09:35:43.767600 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 09:35:43 crc kubenswrapper[4982]: I0123 09:35:43.769305 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 09:35:43 crc kubenswrapper[4982]: I0123 09:35:43.770106 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.596653 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.600437 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.810360 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd69b9bd9-qm8zr"] Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.814593 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.864678 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd69b9bd9-qm8zr"] Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.886995 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.915966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-dns-svc\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.916015 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.916062 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-config\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.916093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmprd\" (UniqueName: \"kubernetes.io/projected/f44af539-af03-49f7-afad-cf5f221f620d-kube-api-access-lmprd\") pod 
\"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.916217 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-sb\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:44 crc kubenswrapper[4982]: I0123 09:35:44.933426 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.018090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-config\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.018157 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmprd\" (UniqueName: \"kubernetes.io/projected/f44af539-af03-49f7-afad-cf5f221f620d-kube-api-access-lmprd\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.018302 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-sb\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.018382 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-dns-svc\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.018408 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.019806 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-config\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.019938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-sb\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.020320 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.020440 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-dns-svc\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.045696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmprd\" (UniqueName: \"kubernetes.io/projected/f44af539-af03-49f7-afad-cf5f221f620d-kube-api-access-lmprd\") pod \"dnsmasq-dns-7fd69b9bd9-qm8zr\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.161234 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.543667 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd69b9bd9-qm8zr"] Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.613475 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" event={"ID":"f44af539-af03-49f7-afad-cf5f221f620d","Type":"ContainerStarted","Data":"6600155272b628898e015fdfa42893a3fa91eeef58e349c4edf0cd48a219cea6"} Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.671416 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.790466 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.792302 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 09:35:45 crc kubenswrapper[4982]: I0123 09:35:45.804716 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 09:35:46 crc kubenswrapper[4982]: I0123 09:35:46.649840 4982 generic.go:334] "Generic (PLEG): container finished" podID="f44af539-af03-49f7-afad-cf5f221f620d" containerID="d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9" exitCode=0 Jan 23 09:35:46 crc kubenswrapper[4982]: I0123 09:35:46.650068 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" event={"ID":"f44af539-af03-49f7-afad-cf5f221f620d","Type":"ContainerDied","Data":"d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9"} Jan 23 09:35:46 crc kubenswrapper[4982]: I0123 09:35:46.666866 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 09:35:46 crc kubenswrapper[4982]: I0123 09:35:46.828011 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:35:46 crc kubenswrapper[4982]: E0123 09:35:46.828340 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:35:47 crc kubenswrapper[4982]: I0123 09:35:47.665499 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" event={"ID":"f44af539-af03-49f7-afad-cf5f221f620d","Type":"ContainerStarted","Data":"20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356"} Jan 23 09:35:47 crc kubenswrapper[4982]: I0123 09:35:47.694736 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" podStartSLOduration=3.694718306 podStartE2EDuration="3.694718306s" podCreationTimestamp="2026-01-23 09:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:47.685946118 +0000 UTC m=+6022.166309504" watchObservedRunningTime="2026-01-23 09:35:47.694718306 +0000 UTC m=+6022.175081692" Jan 23 09:35:48 crc kubenswrapper[4982]: I0123 09:35:48.390291 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:48 crc kubenswrapper[4982]: I0123 09:35:48.390506 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-log" containerID="cri-o://992b59e8f608eb771a6c8e17fdb663991fd5bf5acda56113a2476df76b5d8266" gracePeriod=30 Jan 23 09:35:48 crc kubenswrapper[4982]: I0123 09:35:48.390661 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-api" containerID="cri-o://6f57af5f1488eb91f00153a0f54b9e30b1211980408d8ef9182f7f4d7a52492f" gracePeriod=30 Jan 23 09:35:48 crc kubenswrapper[4982]: I0123 09:35:48.677085 4982 generic.go:334] "Generic (PLEG): container finished" podID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerID="992b59e8f608eb771a6c8e17fdb663991fd5bf5acda56113a2476df76b5d8266" exitCode=143 Jan 23 09:35:48 crc kubenswrapper[4982]: I0123 09:35:48.677175 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e43d545a-c8eb-4b88-83e3-21c842e35ab9","Type":"ContainerDied","Data":"992b59e8f608eb771a6c8e17fdb663991fd5bf5acda56113a2476df76b5d8266"} Jan 23 09:35:48 crc kubenswrapper[4982]: I0123 09:35:48.677451 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:51 crc kubenswrapper[4982]: I0123 09:35:51.708062 4982 generic.go:334] "Generic (PLEG): container finished" podID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerID="6f57af5f1488eb91f00153a0f54b9e30b1211980408d8ef9182f7f4d7a52492f" exitCode=0 Jan 23 09:35:51 crc kubenswrapper[4982]: I0123 09:35:51.708278 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e43d545a-c8eb-4b88-83e3-21c842e35ab9","Type":"ContainerDied","Data":"6f57af5f1488eb91f00153a0f54b9e30b1211980408d8ef9182f7f4d7a52492f"} Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.005352 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.069577 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43d545a-c8eb-4b88-83e3-21c842e35ab9-logs\") pod \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.069713 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-config-data\") pod \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.069742 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-combined-ca-bundle\") pod \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.069840 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nswkn\" (UniqueName: \"kubernetes.io/projected/e43d545a-c8eb-4b88-83e3-21c842e35ab9-kube-api-access-nswkn\") pod \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\" (UID: \"e43d545a-c8eb-4b88-83e3-21c842e35ab9\") " Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.070224 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43d545a-c8eb-4b88-83e3-21c842e35ab9-logs" (OuterVolumeSpecName: "logs") pod "e43d545a-c8eb-4b88-83e3-21c842e35ab9" (UID: "e43d545a-c8eb-4b88-83e3-21c842e35ab9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.083406 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43d545a-c8eb-4b88-83e3-21c842e35ab9-kube-api-access-nswkn" (OuterVolumeSpecName: "kube-api-access-nswkn") pod "e43d545a-c8eb-4b88-83e3-21c842e35ab9" (UID: "e43d545a-c8eb-4b88-83e3-21c842e35ab9"). InnerVolumeSpecName "kube-api-access-nswkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.121191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e43d545a-c8eb-4b88-83e3-21c842e35ab9" (UID: "e43d545a-c8eb-4b88-83e3-21c842e35ab9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.129444 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-config-data" (OuterVolumeSpecName: "config-data") pod "e43d545a-c8eb-4b88-83e3-21c842e35ab9" (UID: "e43d545a-c8eb-4b88-83e3-21c842e35ab9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.173188 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e43d545a-c8eb-4b88-83e3-21c842e35ab9-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.173555 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.173736 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e43d545a-c8eb-4b88-83e3-21c842e35ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.173893 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nswkn\" (UniqueName: \"kubernetes.io/projected/e43d545a-c8eb-4b88-83e3-21c842e35ab9-kube-api-access-nswkn\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.720029 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e43d545a-c8eb-4b88-83e3-21c842e35ab9","Type":"ContainerDied","Data":"f5915b549c821668d58c0d616da82acc8aafa2a2cfb185e8e06c7750a378c431"} Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.720081 4982 scope.go:117] "RemoveContainer" containerID="6f57af5f1488eb91f00153a0f54b9e30b1211980408d8ef9182f7f4d7a52492f" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.720106 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.762369 4982 scope.go:117] "RemoveContainer" containerID="992b59e8f608eb771a6c8e17fdb663991fd5bf5acda56113a2476df76b5d8266" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.763768 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.789400 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.804019 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:52 crc kubenswrapper[4982]: E0123 09:35:52.804492 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-api" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.804510 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-api" Jan 23 09:35:52 crc kubenswrapper[4982]: E0123 09:35:52.804524 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-log" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.804530 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-log" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.804834 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" containerName="nova-api-api" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.804867 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" 
containerName="nova-api-log" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.806108 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.809191 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.809545 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.809550 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.819615 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.888038 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-public-tls-certs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.888093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0338c87c-5dfe-406d-adde-089b0e43220f-logs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.888125 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqg78\" (UniqueName: \"kubernetes.io/projected/0338c87c-5dfe-406d-adde-089b0e43220f-kube-api-access-zqg78\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.888156 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-config-data\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.888222 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.888277 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990089 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-public-tls-certs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0338c87c-5dfe-406d-adde-089b0e43220f-logs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990172 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqg78\" (UniqueName: \"kubernetes.io/projected/0338c87c-5dfe-406d-adde-089b0e43220f-kube-api-access-zqg78\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990200 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-config-data\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990262 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990310 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.990842 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0338c87c-5dfe-406d-adde-089b0e43220f-logs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.994824 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.995681 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-config-data\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:52 crc kubenswrapper[4982]: I0123 09:35:52.995611 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:53 crc kubenswrapper[4982]: I0123 09:35:53.002072 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0338c87c-5dfe-406d-adde-089b0e43220f-public-tls-certs\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:53 crc kubenswrapper[4982]: I0123 09:35:53.014756 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqg78\" (UniqueName: 
\"kubernetes.io/projected/0338c87c-5dfe-406d-adde-089b0e43220f-kube-api-access-zqg78\") pod \"nova-api-0\" (UID: \"0338c87c-5dfe-406d-adde-089b0e43220f\") " pod="openstack/nova-api-0" Jan 23 09:35:53 crc kubenswrapper[4982]: I0123 09:35:53.134343 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 09:35:53 crc kubenswrapper[4982]: I0123 09:35:53.602171 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 09:35:53 crc kubenswrapper[4982]: I0123 09:35:53.735611 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0338c87c-5dfe-406d-adde-089b0e43220f","Type":"ContainerStarted","Data":"638b454676d3cdad081cfd5a8de7a04d819a2d9649f6fe3585e24d2fffe877a2"} Jan 23 09:35:53 crc kubenswrapper[4982]: I0123 09:35:53.840195 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43d545a-c8eb-4b88-83e3-21c842e35ab9" path="/var/lib/kubelet/pods/e43d545a-c8eb-4b88-83e3-21c842e35ab9/volumes" Jan 23 09:35:54 crc kubenswrapper[4982]: I0123 09:35:54.771723 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0338c87c-5dfe-406d-adde-089b0e43220f","Type":"ContainerStarted","Data":"b40570e4cc74cd55f83567f3d5245624bf4338c2d9d14f1ae3768960ca047bbc"} Jan 23 09:35:54 crc kubenswrapper[4982]: I0123 09:35:54.772113 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0338c87c-5dfe-406d-adde-089b0e43220f","Type":"ContainerStarted","Data":"9f09ae48462feab67da30989c58b49a56f67f995a5d20e92af4d86220fec11b2"} Jan 23 09:35:54 crc kubenswrapper[4982]: I0123 09:35:54.812845 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.81282568 podStartE2EDuration="2.81282568s" podCreationTimestamp="2026-01-23 09:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:35:54.809658024 +0000 UTC m=+6029.290021430" watchObservedRunningTime="2026-01-23 09:35:54.81282568 +0000 UTC m=+6029.293189066" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.163076 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.237015 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b588dccf-tc4xx"] Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.237247 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerName="dnsmasq-dns" containerID="cri-o://c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde" gracePeriod=10 Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.768465 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.786905 4982 generic.go:334] "Generic (PLEG): container finished" podID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerID="c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde" exitCode=0 Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.787004 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.786998 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" event={"ID":"9d700614-24d2-4917-bfba-1fd08e4e62c6","Type":"ContainerDied","Data":"c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde"} Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.787059 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b588dccf-tc4xx" event={"ID":"9d700614-24d2-4917-bfba-1fd08e4e62c6","Type":"ContainerDied","Data":"e0e777f97b9ad7336ebda96f912493326d61fc3cf50b2e23048d7445fb89d71e"} Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.787080 4982 scope.go:117] "RemoveContainer" containerID="c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.815382 4982 scope.go:117] "RemoveContainer" containerID="d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.848302 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27njj\" (UniqueName: \"kubernetes.io/projected/9d700614-24d2-4917-bfba-1fd08e4e62c6-kube-api-access-27njj\") pod \"9d700614-24d2-4917-bfba-1fd08e4e62c6\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.848403 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-config\") pod \"9d700614-24d2-4917-bfba-1fd08e4e62c6\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.848463 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-nb\") pod \"9d700614-24d2-4917-bfba-1fd08e4e62c6\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.848729 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-sb\") pod \"9d700614-24d2-4917-bfba-1fd08e4e62c6\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.848754 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-dns-svc\") pod \"9d700614-24d2-4917-bfba-1fd08e4e62c6\" (UID: \"9d700614-24d2-4917-bfba-1fd08e4e62c6\") " Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.854094 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d700614-24d2-4917-bfba-1fd08e4e62c6-kube-api-access-27njj" (OuterVolumeSpecName: "kube-api-access-27njj") pod "9d700614-24d2-4917-bfba-1fd08e4e62c6" (UID: "9d700614-24d2-4917-bfba-1fd08e4e62c6"). InnerVolumeSpecName "kube-api-access-27njj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.855373 4982 scope.go:117] "RemoveContainer" containerID="c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde" Jan 23 09:35:55 crc kubenswrapper[4982]: E0123 09:35:55.855855 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde\": container with ID starting with c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde not found: ID does not exist" containerID="c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.855904 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde"} err="failed to get container status \"c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde\": rpc error: code = NotFound desc = could not find container \"c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde\": container with ID starting with c1f8626a5cb8e4aefa8a76660daf4636fe3137aaa1619715cc80d3b9687e4cde not found: ID does not exist" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.855930 4982 scope.go:117] "RemoveContainer" containerID="d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6" Jan 23 09:35:55 crc kubenswrapper[4982]: E0123 09:35:55.856237 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6\": container with ID starting with d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6 not found: ID does not exist" containerID="d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.856253 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6"} err="failed to get container status \"d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6\": rpc error: code = NotFound desc = could not find container \"d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6\": container with ID starting with d60b891f35bcb706ec17d6539d54b6148c5aec87faf9289dce7ded666997d4b6 not found: ID does not exist" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.905471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d700614-24d2-4917-bfba-1fd08e4e62c6" (UID: "9d700614-24d2-4917-bfba-1fd08e4e62c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.907422 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d700614-24d2-4917-bfba-1fd08e4e62c6" (UID: "9d700614-24d2-4917-bfba-1fd08e4e62c6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.918967 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-config" (OuterVolumeSpecName: "config") pod "9d700614-24d2-4917-bfba-1fd08e4e62c6" (UID: "9d700614-24d2-4917-bfba-1fd08e4e62c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.920302 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d700614-24d2-4917-bfba-1fd08e4e62c6" (UID: "9d700614-24d2-4917-bfba-1fd08e4e62c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.951058 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.951093 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.951102 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27njj\" (UniqueName: \"kubernetes.io/projected/9d700614-24d2-4917-bfba-1fd08e4e62c6-kube-api-access-27njj\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.951113 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:55 crc kubenswrapper[4982]: I0123 09:35:55.951122 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d700614-24d2-4917-bfba-1fd08e4e62c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:35:56 crc kubenswrapper[4982]: I0123 09:35:56.123857 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b588dccf-tc4xx"] Jan 23 09:35:56 crc kubenswrapper[4982]: I0123 09:35:56.133248 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55b588dccf-tc4xx"] Jan 23 09:35:57 crc kubenswrapper[4982]: I0123 09:35:57.839460 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" path="/var/lib/kubelet/pods/9d700614-24d2-4917-bfba-1fd08e4e62c6/volumes" Jan 23 09:35:59 crc kubenswrapper[4982]: I0123 09:35:59.830700 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:35:59 crc kubenswrapper[4982]: E0123 09:35:59.831281 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:36:03 crc kubenswrapper[4982]: I0123 09:36:03.134820 4982 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:36:03 crc kubenswrapper[4982]: I0123 09:36:03.135787 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 09:36:04 crc kubenswrapper[4982]: I0123 09:36:04.151934 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0338c87c-5dfe-406d-adde-089b0e43220f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.115:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:36:04 crc kubenswrapper[4982]: I0123 09:36:04.151953 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0338c87c-5dfe-406d-adde-089b0e43220f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.115:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:36:10 crc kubenswrapper[4982]: I0123 09:36:10.827130 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:36:10 crc kubenswrapper[4982]: E0123 09:36:10.828055 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:36:13 crc kubenswrapper[4982]: I0123 09:36:13.142566 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 09:36:13 crc kubenswrapper[4982]: I0123 09:36:13.143870 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 09:36:13 crc kubenswrapper[4982]: I0123 09:36:13.144206 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 09:36:13 crc kubenswrapper[4982]: I0123 09:36:13.144269 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 09:36:13 crc kubenswrapper[4982]: I0123 09:36:13.149905 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 09:36:13 crc kubenswrapper[4982]: I0123 09:36:13.150575 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.467521 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gr5c7"] Jan 23 09:36:14 crc kubenswrapper[4982]: E0123 09:36:14.468274 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerName="init" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.468286 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerName="init" Jan 23 09:36:14 crc kubenswrapper[4982]: E0123 09:36:14.468324 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerName="dnsmasq-dns" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.468332 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerName="dnsmasq-dns" 
Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.494491 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d700614-24d2-4917-bfba-1fd08e4e62c6" containerName="dnsmasq-dns" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.499093 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr5c7"] Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.499249 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.580508 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-utilities\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.580810 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdx9\" (UniqueName: \"kubernetes.io/projected/8cbbe099-36e5-497f-aabc-398007761510-kube-api-access-kwdx9\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.580912 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-catalog-content\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.683332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-utilities\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.683383 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwdx9\" (UniqueName: \"kubernetes.io/projected/8cbbe099-36e5-497f-aabc-398007761510-kube-api-access-kwdx9\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.683408 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-catalog-content\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.683984 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-catalog-content\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.684145 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-utilities\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.702254 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwdx9\" (UniqueName: \"kubernetes.io/projected/8cbbe099-36e5-497f-aabc-398007761510-kube-api-access-kwdx9\") pod \"community-operators-gr5c7\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:14 crc kubenswrapper[4982]: I0123 09:36:14.827405 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:15 crc kubenswrapper[4982]: I0123 09:36:15.438591 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr5c7"] Jan 23 09:36:16 crc kubenswrapper[4982]: I0123 09:36:16.017376 4982 generic.go:334] "Generic (PLEG): container finished" podID="8cbbe099-36e5-497f-aabc-398007761510" containerID="013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be" exitCode=0 Jan 23 09:36:16 crc kubenswrapper[4982]: I0123 09:36:16.017557 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5c7" event={"ID":"8cbbe099-36e5-497f-aabc-398007761510","Type":"ContainerDied","Data":"013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be"} Jan 23 09:36:16 crc kubenswrapper[4982]: I0123 09:36:16.018114 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5c7" event={"ID":"8cbbe099-36e5-497f-aabc-398007761510","Type":"ContainerStarted","Data":"6d6b7e31e3f5c5ad4b12954ee5694cfa15bf60c22f04aea3aba2003f61e3114a"} Jan 23 09:36:18 crc kubenswrapper[4982]: I0123 09:36:18.038912 4982 generic.go:334] "Generic (PLEG): container finished" podID="8cbbe099-36e5-497f-aabc-398007761510" containerID="947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6" exitCode=0 Jan 23 09:36:18 crc kubenswrapper[4982]: I0123 09:36:18.039317 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5c7" event={"ID":"8cbbe099-36e5-497f-aabc-398007761510","Type":"ContainerDied","Data":"947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6"} Jan 23 09:36:19 crc kubenswrapper[4982]: I0123 09:36:19.051851 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5c7" event={"ID":"8cbbe099-36e5-497f-aabc-398007761510","Type":"ContainerStarted","Data":"060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54"} Jan 23 09:36:19 crc kubenswrapper[4982]: I0123 09:36:19.076455 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gr5c7" podStartSLOduration=2.654629207 podStartE2EDuration="5.076436746s" podCreationTimestamp="2026-01-23 09:36:14 +0000 UTC" firstStartedPulling="2026-01-23 09:36:16.019841157 +0000 UTC m=+6050.500204543" lastFinishedPulling="2026-01-23 09:36:18.441648696 +0000 UTC m=+6052.922012082" observedRunningTime="2026-01-23 09:36:19.071981775 +0000 UTC m=+6053.552345161" watchObservedRunningTime="2026-01-23 09:36:19.076436746 +0000 UTC m=+6053.556800122" Jan 23 09:36:24 crc kubenswrapper[4982]: I0123 09:36:24.828845 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:24 crc kubenswrapper[4982]: I0123 09:36:24.829380 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:24 crc kubenswrapper[4982]: I0123 09:36:24.880573 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:25 crc kubenswrapper[4982]: I0123 09:36:25.159559 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:25 crc kubenswrapper[4982]: I0123 09:36:25.856137 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:36:26 crc kubenswrapper[4982]: I0123 09:36:26.121144 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"ed33b1a30507fbf99e82a0af06fe01d0b4796979bfbb452c4c776529e48b651d"} Jan 23 09:36:27 crc kubenswrapper[4982]: I0123 09:36:27.455881 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gr5c7"] Jan 23 09:36:27 crc kubenswrapper[4982]: I0123 09:36:27.456840 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gr5c7" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="registry-server" containerID="cri-o://060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54" gracePeriod=2 Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.009711 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.114318 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-catalog-content\") pod \"8cbbe099-36e5-497f-aabc-398007761510\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.114392 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-utilities\") pod \"8cbbe099-36e5-497f-aabc-398007761510\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.114680 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwdx9\" (UniqueName: \"kubernetes.io/projected/8cbbe099-36e5-497f-aabc-398007761510-kube-api-access-kwdx9\") pod \"8cbbe099-36e5-497f-aabc-398007761510\" (UID: \"8cbbe099-36e5-497f-aabc-398007761510\") " Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.115511 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-utilities" (OuterVolumeSpecName: "utilities") pod "8cbbe099-36e5-497f-aabc-398007761510" (UID: "8cbbe099-36e5-497f-aabc-398007761510"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.120146 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cbbe099-36e5-497f-aabc-398007761510-kube-api-access-kwdx9" (OuterVolumeSpecName: "kube-api-access-kwdx9") pod "8cbbe099-36e5-497f-aabc-398007761510" (UID: "8cbbe099-36e5-497f-aabc-398007761510"). InnerVolumeSpecName "kube-api-access-kwdx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.153683 4982 generic.go:334] "Generic (PLEG): container finished" podID="8cbbe099-36e5-497f-aabc-398007761510" containerID="060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54" exitCode=0 Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.153729 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5c7" event={"ID":"8cbbe099-36e5-497f-aabc-398007761510","Type":"ContainerDied","Data":"060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54"} Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.153755 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5c7" event={"ID":"8cbbe099-36e5-497f-aabc-398007761510","Type":"ContainerDied","Data":"6d6b7e31e3f5c5ad4b12954ee5694cfa15bf60c22f04aea3aba2003f61e3114a"} Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.153762 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr5c7" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.153772 4982 scope.go:117] "RemoveContainer" containerID="060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.167921 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cbbe099-36e5-497f-aabc-398007761510" (UID: "8cbbe099-36e5-497f-aabc-398007761510"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.205536 4982 scope.go:117] "RemoveContainer" containerID="947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.227556 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwdx9\" (UniqueName: \"kubernetes.io/projected/8cbbe099-36e5-497f-aabc-398007761510-kube-api-access-kwdx9\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.228173 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.228374 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cbbe099-36e5-497f-aabc-398007761510-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.369807 4982 scope.go:117] "RemoveContainer" containerID="013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.391860 4982 scope.go:117] "RemoveContainer" containerID="060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54" Jan 23 09:36:29 crc kubenswrapper[4982]: E0123 09:36:29.392227 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54\": container with ID starting with 060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54 not found: ID does not exist" containerID="060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.392259 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54"} err="failed to get container status \"060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54\": rpc error: code = NotFound desc = could not find container \"060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54\": container with ID starting with 060669130d02a898eeeec93cf3e1f1bfb287c450cf7388caf18553de7501fd54 not found: ID does not exist" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.392282 4982 scope.go:117] "RemoveContainer" containerID="947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6" Jan 23 09:36:29 crc kubenswrapper[4982]: E0123 09:36:29.392676 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6\": container with ID starting with 947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6 not found: ID does not exist" containerID="947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.392696 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6"} err="failed to get container status \"947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6\": rpc error: code = NotFound desc = could not find container 
\"947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6\": container with ID starting with 947de52f7351a06842243cfdd2e5e56835976cf112abe9af13d42354ceedcba6 not found: ID does not exist" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.392708 4982 scope.go:117] "RemoveContainer" containerID="013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be" Jan 23 09:36:29 crc kubenswrapper[4982]: E0123 09:36:29.392964 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be\": container with ID starting with 013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be not found: ID does not exist" containerID="013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.393004 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be"} err="failed to get container status \"013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be\": rpc error: code = NotFound desc = could not find container \"013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be\": container with ID starting with 013257ba75ede5c3f189d3f03a63494090e96816b3dda8674a5afe21f185d5be not found: ID does not exist" Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.490405 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gr5c7"] Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.499846 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gr5c7"] Jan 23 09:36:29 crc kubenswrapper[4982]: I0123 09:36:29.838317 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cbbe099-36e5-497f-aabc-398007761510" path="/var/lib/kubelet/pods/8cbbe099-36e5-497f-aabc-398007761510/volumes" Jan 23 09:36:34 crc kubenswrapper[4982]: I0123 09:36:34.045301 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-03a3-account-create-update-9nsdf"] Jan 23 09:36:34 crc kubenswrapper[4982]: I0123 09:36:34.058155 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jnrtx"] Jan 23 09:36:34 crc kubenswrapper[4982]: I0123 09:36:34.068638 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-03a3-account-create-update-9nsdf"] Jan 23 09:36:34 crc kubenswrapper[4982]: I0123 09:36:34.079100 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jnrtx"] Jan 23 09:36:35 crc kubenswrapper[4982]: I0123 09:36:35.839881 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13511325-c990-4fae-80f3-4784786ca6f1" path="/var/lib/kubelet/pods/13511325-c990-4fae-80f3-4784786ca6f1/volumes" Jan 23 09:36:35 crc kubenswrapper[4982]: I0123 09:36:35.840781 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b0dedd0-c75f-423f-9da4-b0deef1e27cf" path="/var/lib/kubelet/pods/6b0dedd0-c75f-423f-9da4-b0deef1e27cf/volumes" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.579363 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fpsz6"] Jan 23 09:36:37 crc kubenswrapper[4982]: E0123 09:36:37.580140 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbbe099-36e5-497f-aabc-398007761510" 
containerName="registry-server" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.580158 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="registry-server" Jan 23 09:36:37 crc kubenswrapper[4982]: E0123 09:36:37.580182 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="extract-content" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.580189 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="extract-content" Jan 23 09:36:37 crc kubenswrapper[4982]: E0123 09:36:37.580219 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="extract-utilities" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.580226 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="extract-utilities" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.580455 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cbbe099-36e5-497f-aabc-398007761510" containerName="registry-server" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.581203 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.583568 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.583979 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-wqjbd" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.584350 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.594474 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fpsz6"] Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.605696 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kkldr"] Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.608266 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.631788 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kkldr"] Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.718890 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-run-ovn\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.718966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-run\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719006 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-log\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719049 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-combined-ca-bundle\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719085 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-etc-ovs\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719156 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-run\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719175 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj64k\" (UniqueName: \"kubernetes.io/projected/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-kube-api-access-hj64k\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719201 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9e18733-c4a2-4870-b4c4-bdb7133bab43-scripts\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719231 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-log-ovn\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719291 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-lib\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719351 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nktv\" (UniqueName: \"kubernetes.io/projected/b9e18733-c4a2-4870-b4c4-bdb7133bab43-kube-api-access-5nktv\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719384 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-ovn-controller-tls-certs\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.719419 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-scripts\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.822157 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-run-ovn\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.822216 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-run-ovn\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.822296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-run\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.822687 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-run\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-log\") pod \"ovn-controller-ovs-kkldr\" (UID: 
\"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824225 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-combined-ca-bundle\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824275 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-etc-ovs\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824232 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-log\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824403 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-run\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824484 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-run\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824513 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj64k\" (UniqueName: \"kubernetes.io/projected/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-kube-api-access-hj64k\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824551 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9e18733-c4a2-4870-b4c4-bdb7133bab43-scripts\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824594 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-log-ovn\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824594 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-etc-ovs\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824823 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-var-log-ovn\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.824870 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-lib\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.825005 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nktv\" (UniqueName: \"kubernetes.io/projected/b9e18733-c4a2-4870-b4c4-bdb7133bab43-kube-api-access-5nktv\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.825070 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-ovn-controller-tls-certs\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.825099 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b9e18733-c4a2-4870-b4c4-bdb7133bab43-var-lib\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.825130 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-scripts\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.827084 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9e18733-c4a2-4870-b4c4-bdb7133bab43-scripts\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.827858 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-scripts\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.830009 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-ovn-controller-tls-certs\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.837019 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-combined-ca-bundle\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc 
kubenswrapper[4982]: I0123 09:36:37.842393 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj64k\" (UniqueName: \"kubernetes.io/projected/bcccbc39-8bcd-4bf4-81a8-5078ae7329c7-kube-api-access-hj64k\") pod \"ovn-controller-fpsz6\" (UID: \"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7\") " pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.845499 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nktv\" (UniqueName: \"kubernetes.io/projected/b9e18733-c4a2-4870-b4c4-bdb7133bab43-kube-api-access-5nktv\") pod \"ovn-controller-ovs-kkldr\" (UID: \"b9e18733-c4a2-4870-b4c4-bdb7133bab43\") " pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.909720 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:37 crc kubenswrapper[4982]: I0123 09:36:37.931980 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:38 crc kubenswrapper[4982]: I0123 09:36:38.596505 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fpsz6"] Jan 23 09:36:38 crc kubenswrapper[4982]: I0123 09:36:38.884863 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kkldr"] Jan 23 09:36:38 crc kubenswrapper[4982]: W0123 09:36:38.888799 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9e18733_c4a2_4870_b4c4_bdb7133bab43.slice/crio-379e1c0915d397961d6d6425015d6d6aebe4d297011ce985c38a457fe307464d WatchSource:0}: Error finding container 379e1c0915d397961d6d6425015d6d6aebe4d297011ce985c38a457fe307464d: Status 404 returned error can't find the container with id 379e1c0915d397961d6d6425015d6d6aebe4d297011ce985c38a457fe307464d Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.275537 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6" event={"ID":"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7","Type":"ContainerStarted","Data":"ba4dfe9ec46e1567ee2d6d36ffe6c764fc7c53b901f038debfcb293e9ef08d33"} Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.275950 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6" event={"ID":"bcccbc39-8bcd-4bf4-81a8-5078ae7329c7","Type":"ContainerStarted","Data":"341fd5b2c24149268151bcbfbd59e48856a56bfe9f705ebb15ee2844c82d389b"} Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.276276 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fpsz6" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.278217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kkldr" event={"ID":"b9e18733-c4a2-4870-b4c4-bdb7133bab43","Type":"ContainerStarted","Data":"611df470b85c4dcc4a6c75a276495ee661ea7b507cfd5cb47c0eee3923fe228b"} Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.278251 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kkldr" event={"ID":"b9e18733-c4a2-4870-b4c4-bdb7133bab43","Type":"ContainerStarted","Data":"379e1c0915d397961d6d6425015d6d6aebe4d297011ce985c38a457fe307464d"} Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.332281 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qvfjr"] Jan 23 09:36:39 crc 
kubenswrapper[4982]: I0123 09:36:39.336008 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.340454 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.356500 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qvfjr"] Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.376144 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fpsz6" podStartSLOduration=2.3761246910000002 podStartE2EDuration="2.376124691s" podCreationTimestamp="2026-01-23 09:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:36:39.315884525 +0000 UTC m=+6073.796247901" watchObservedRunningTime="2026-01-23 09:36:39.376124691 +0000 UTC m=+6073.856488077" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.498021 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240ff876-28c0-4c64-8157-205c9719f944-combined-ca-bundle\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.498097 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/240ff876-28c0-4c64-8157-205c9719f944-ovs-rundir\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.498140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/240ff876-28c0-4c64-8157-205c9719f944-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.498340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ff876-28c0-4c64-8157-205c9719f944-config\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.498404 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwhd\" (UniqueName: \"kubernetes.io/projected/240ff876-28c0-4c64-8157-205c9719f944-kube-api-access-ltwhd\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.498518 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/240ff876-28c0-4c64-8157-205c9719f944-ovn-rundir\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc 
kubenswrapper[4982]: I0123 09:36:39.600358 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/240ff876-28c0-4c64-8157-205c9719f944-ovs-rundir\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.600436 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/240ff876-28c0-4c64-8157-205c9719f944-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.600502 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ff876-28c0-4c64-8157-205c9719f944-config\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.600525 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltwhd\" (UniqueName: \"kubernetes.io/projected/240ff876-28c0-4c64-8157-205c9719f944-kube-api-access-ltwhd\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.600552 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/240ff876-28c0-4c64-8157-205c9719f944-ovn-rundir\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.600674 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240ff876-28c0-4c64-8157-205c9719f944-combined-ca-bundle\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.600758 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/240ff876-28c0-4c64-8157-205c9719f944-ovs-rundir\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.601016 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/240ff876-28c0-4c64-8157-205c9719f944-ovn-rundir\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.601716 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ff876-28c0-4c64-8157-205c9719f944-config\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.606277 4982 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/240ff876-28c0-4c64-8157-205c9719f944-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.606301 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/240ff876-28c0-4c64-8157-205c9719f944-combined-ca-bundle\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.616718 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltwhd\" (UniqueName: \"kubernetes.io/projected/240ff876-28c0-4c64-8157-205c9719f944-kube-api-access-ltwhd\") pod \"ovn-controller-metrics-qvfjr\" (UID: \"240ff876-28c0-4c64-8157-205c9719f944\") " pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:39 crc kubenswrapper[4982]: I0123 09:36:39.674115 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qvfjr" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.043396 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hmt2w"] Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.055150 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-hmt2w"] Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.231573 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qvfjr"] Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.277682 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-jhg7l"] Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.279720 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.293018 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-jhg7l"] Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.324111 4982 generic.go:334] "Generic (PLEG): container finished" podID="b9e18733-c4a2-4870-b4c4-bdb7133bab43" containerID="611df470b85c4dcc4a6c75a276495ee661ea7b507cfd5cb47c0eee3923fe228b" exitCode=0 Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.324217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kkldr" event={"ID":"b9e18733-c4a2-4870-b4c4-bdb7133bab43","Type":"ContainerDied","Data":"611df470b85c4dcc4a6c75a276495ee661ea7b507cfd5cb47c0eee3923fe228b"} Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.331158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qvfjr" event={"ID":"240ff876-28c0-4c64-8157-205c9719f944","Type":"ContainerStarted","Data":"57b0c9a84160b62a29e288613eeb09ad376f7f8d33f40c946ffbe5b8dd0130b2"} Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.417195 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4kd\" (UniqueName: \"kubernetes.io/projected/e24b3bf7-b1c7-47da-8799-b5888e75bda2-kube-api-access-4m4kd\") pod \"octavia-db-create-jhg7l\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.417684 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24b3bf7-b1c7-47da-8799-b5888e75bda2-operator-scripts\") pod \"octavia-db-create-jhg7l\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.519371 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m4kd\" (UniqueName: \"kubernetes.io/projected/e24b3bf7-b1c7-47da-8799-b5888e75bda2-kube-api-access-4m4kd\") pod \"octavia-db-create-jhg7l\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.519477 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24b3bf7-b1c7-47da-8799-b5888e75bda2-operator-scripts\") pod \"octavia-db-create-jhg7l\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.520155 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24b3bf7-b1c7-47da-8799-b5888e75bda2-operator-scripts\") pod \"octavia-db-create-jhg7l\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.539976 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m4kd\" (UniqueName: \"kubernetes.io/projected/e24b3bf7-b1c7-47da-8799-b5888e75bda2-kube-api-access-4m4kd\") pod \"octavia-db-create-jhg7l\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:40 crc kubenswrapper[4982]: I0123 09:36:40.700790 4982 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.041710 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-4217-account-create-update-hjp95"] Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.043995 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.045698 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.053305 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-4217-account-create-update-hjp95"] Jan 23 09:36:41 crc kubenswrapper[4982]: W0123 09:36:41.222513 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode24b3bf7_b1c7_47da_8799_b5888e75bda2.slice/crio-955addbeb76611a1b9633dfad04fcda198a0ea9c303b15fec6404c654cb84b7b WatchSource:0}: Error finding container 955addbeb76611a1b9633dfad04fcda198a0ea9c303b15fec6404c654cb84b7b: Status 404 returned error can't find the container with id 955addbeb76611a1b9633dfad04fcda198a0ea9c303b15fec6404c654cb84b7b Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.222848 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-jhg7l"] Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.237610 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ggh2\" (UniqueName: \"kubernetes.io/projected/73ac8822-3255-4dd1-916e-283f0711ae83-kube-api-access-8ggh2\") pod \"octavia-4217-account-create-update-hjp95\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.237823 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ac8822-3255-4dd1-916e-283f0711ae83-operator-scripts\") pod \"octavia-4217-account-create-update-hjp95\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.339901 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ggh2\" (UniqueName: \"kubernetes.io/projected/73ac8822-3255-4dd1-916e-283f0711ae83-kube-api-access-8ggh2\") pod \"octavia-4217-account-create-update-hjp95\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.340065 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ac8822-3255-4dd1-916e-283f0711ae83-operator-scripts\") pod \"octavia-4217-account-create-update-hjp95\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.341043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ac8822-3255-4dd1-916e-283f0711ae83-operator-scripts\") pod \"octavia-4217-account-create-update-hjp95\" (UID: 
\"73ac8822-3255-4dd1-916e-283f0711ae83\") " pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.344034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kkldr" event={"ID":"b9e18733-c4a2-4870-b4c4-bdb7133bab43","Type":"ContainerStarted","Data":"6dbd32c63ce2e4fa53f59ec6f38755ac6f4ffdbd9956e821c1c9cd0ea908060c"} Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.344080 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kkldr" event={"ID":"b9e18733-c4a2-4870-b4c4-bdb7133bab43","Type":"ContainerStarted","Data":"0a57eb94e65000e183e0382f2fb1d7045e82655bfd89a48819f85db3b16ddc5e"} Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.344145 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.344191 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.346213 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qvfjr" event={"ID":"240ff876-28c0-4c64-8157-205c9719f944","Type":"ContainerStarted","Data":"6ca7d6c260c35e133b6cace17818dfb8e0a32f4fbd01fe6e4f92c8fde98aba8f"} Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.348518 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jhg7l" event={"ID":"e24b3bf7-b1c7-47da-8799-b5888e75bda2","Type":"ContainerStarted","Data":"955addbeb76611a1b9633dfad04fcda198a0ea9c303b15fec6404c654cb84b7b"} Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.367397 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kkldr" podStartSLOduration=4.367378634 podStartE2EDuration="4.367378634s" podCreationTimestamp="2026-01-23 09:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:36:41.362844391 +0000 UTC m=+6075.843207777" watchObservedRunningTime="2026-01-23 09:36:41.367378634 +0000 UTC m=+6075.847742020" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.374344 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ggh2\" (UniqueName: \"kubernetes.io/projected/73ac8822-3255-4dd1-916e-283f0711ae83-kube-api-access-8ggh2\") pod \"octavia-4217-account-create-update-hjp95\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.431052 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qvfjr" podStartSLOduration=2.431030833 podStartE2EDuration="2.431030833s" podCreationTimestamp="2026-01-23 09:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:36:41.385375013 +0000 UTC m=+6075.865738399" watchObservedRunningTime="2026-01-23 09:36:41.431030833 +0000 UTC m=+6075.911394219" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.673085 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:41 crc kubenswrapper[4982]: I0123 09:36:41.843880 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a403bece-dda1-45c7-a804-4a1d7329d2d1" path="/var/lib/kubelet/pods/a403bece-dda1-45c7-a804-4a1d7329d2d1/volumes" Jan 23 09:36:42 crc kubenswrapper[4982]: I0123 09:36:42.173279 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-4217-account-create-update-hjp95"] Jan 23 09:36:42 crc kubenswrapper[4982]: I0123 09:36:42.359646 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-4217-account-create-update-hjp95" event={"ID":"73ac8822-3255-4dd1-916e-283f0711ae83","Type":"ContainerStarted","Data":"cd56021449c2a863f931acd0e833147776108afb4e81a0834b558012ed9c383e"} Jan 23 09:36:42 crc kubenswrapper[4982]: I0123 09:36:42.361495 4982 generic.go:334] "Generic (PLEG): container finished" podID="e24b3bf7-b1c7-47da-8799-b5888e75bda2" containerID="bbc5c35e8ed7bf8ee3ef216257e5254e47f351e3348fb3a5474101f9c202141d" exitCode=0 Jan 23 09:36:42 crc kubenswrapper[4982]: I0123 09:36:42.361528 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jhg7l" event={"ID":"e24b3bf7-b1c7-47da-8799-b5888e75bda2","Type":"ContainerDied","Data":"bbc5c35e8ed7bf8ee3ef216257e5254e47f351e3348fb3a5474101f9c202141d"} Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.374746 4982 generic.go:334] "Generic (PLEG): container finished" podID="73ac8822-3255-4dd1-916e-283f0711ae83" containerID="885b8458a4ac3290eb1fff620a9a2e89da32b06b3d62b667462de46fab70a36a" exitCode=0 Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.374853 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-4217-account-create-update-hjp95" event={"ID":"73ac8822-3255-4dd1-916e-283f0711ae83","Type":"ContainerDied","Data":"885b8458a4ac3290eb1fff620a9a2e89da32b06b3d62b667462de46fab70a36a"} Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.737831 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.909564 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24b3bf7-b1c7-47da-8799-b5888e75bda2-operator-scripts\") pod \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.909990 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m4kd\" (UniqueName: \"kubernetes.io/projected/e24b3bf7-b1c7-47da-8799-b5888e75bda2-kube-api-access-4m4kd\") pod \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\" (UID: \"e24b3bf7-b1c7-47da-8799-b5888e75bda2\") " Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.910445 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24b3bf7-b1c7-47da-8799-b5888e75bda2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e24b3bf7-b1c7-47da-8799-b5888e75bda2" (UID: "e24b3bf7-b1c7-47da-8799-b5888e75bda2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.910637 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24b3bf7-b1c7-47da-8799-b5888e75bda2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:43 crc kubenswrapper[4982]: I0123 09:36:43.916566 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24b3bf7-b1c7-47da-8799-b5888e75bda2-kube-api-access-4m4kd" (OuterVolumeSpecName: "kube-api-access-4m4kd") pod "e24b3bf7-b1c7-47da-8799-b5888e75bda2" (UID: "e24b3bf7-b1c7-47da-8799-b5888e75bda2"). InnerVolumeSpecName "kube-api-access-4m4kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.012754 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m4kd\" (UniqueName: \"kubernetes.io/projected/e24b3bf7-b1c7-47da-8799-b5888e75bda2-kube-api-access-4m4kd\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.256670 4982 scope.go:117] "RemoveContainer" containerID="867922a928daaf4a68f2b8b4f38630d39af5f3203e4e417a82c2c909023d80b5" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.279601 4982 scope.go:117] "RemoveContainer" containerID="2c768109e125c62aa0ef2f8c52e9255241c126b528466715ba09f7349f0be88c" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.353157 4982 scope.go:117] "RemoveContainer" containerID="ed5cc9abea52b21c7b9db6a511346dd72b7bcb177aec74a6527b172ed5456083" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.391119 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-jhg7l" event={"ID":"e24b3bf7-b1c7-47da-8799-b5888e75bda2","Type":"ContainerDied","Data":"955addbeb76611a1b9633dfad04fcda198a0ea9c303b15fec6404c654cb84b7b"} Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.391171 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955addbeb76611a1b9633dfad04fcda198a0ea9c303b15fec6404c654cb84b7b" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.391138 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-jhg7l" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.622235 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.727151 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ac8822-3255-4dd1-916e-283f0711ae83-operator-scripts\") pod \"73ac8822-3255-4dd1-916e-283f0711ae83\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.727374 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ggh2\" (UniqueName: \"kubernetes.io/projected/73ac8822-3255-4dd1-916e-283f0711ae83-kube-api-access-8ggh2\") pod \"73ac8822-3255-4dd1-916e-283f0711ae83\" (UID: \"73ac8822-3255-4dd1-916e-283f0711ae83\") " Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.728014 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ac8822-3255-4dd1-916e-283f0711ae83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73ac8822-3255-4dd1-916e-283f0711ae83" (UID: "73ac8822-3255-4dd1-916e-283f0711ae83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.730594 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ac8822-3255-4dd1-916e-283f0711ae83-kube-api-access-8ggh2" (OuterVolumeSpecName: "kube-api-access-8ggh2") pod "73ac8822-3255-4dd1-916e-283f0711ae83" (UID: "73ac8822-3255-4dd1-916e-283f0711ae83"). InnerVolumeSpecName "kube-api-access-8ggh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.830299 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ggh2\" (UniqueName: \"kubernetes.io/projected/73ac8822-3255-4dd1-916e-283f0711ae83-kube-api-access-8ggh2\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:44 crc kubenswrapper[4982]: I0123 09:36:44.830325 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ac8822-3255-4dd1-916e-283f0711ae83-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:45 crc kubenswrapper[4982]: I0123 09:36:45.440507 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-4217-account-create-update-hjp95" event={"ID":"73ac8822-3255-4dd1-916e-283f0711ae83","Type":"ContainerDied","Data":"cd56021449c2a863f931acd0e833147776108afb4e81a0834b558012ed9c383e"} Jan 23 09:36:45 crc kubenswrapper[4982]: I0123 09:36:45.440563 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd56021449c2a863f931acd0e833147776108afb4e81a0834b558012ed9c383e" Jan 23 09:36:45 crc kubenswrapper[4982]: I0123 09:36:45.440658 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-4217-account-create-update-hjp95" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.302915 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-j8crf"] Jan 23 09:36:46 crc kubenswrapper[4982]: E0123 09:36:46.303795 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ac8822-3255-4dd1-916e-283f0711ae83" containerName="mariadb-account-create-update" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.303817 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ac8822-3255-4dd1-916e-283f0711ae83" containerName="mariadb-account-create-update" Jan 23 09:36:46 crc kubenswrapper[4982]: E0123 09:36:46.303871 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24b3bf7-b1c7-47da-8799-b5888e75bda2" containerName="mariadb-database-create" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.303895 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24b3bf7-b1c7-47da-8799-b5888e75bda2" containerName="mariadb-database-create" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.304134 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ac8822-3255-4dd1-916e-283f0711ae83" containerName="mariadb-account-create-update" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.304182 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24b3bf7-b1c7-47da-8799-b5888e75bda2" containerName="mariadb-database-create" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.305009 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.315760 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-j8crf"] Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.359904 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqxq\" (UniqueName: \"kubernetes.io/projected/a524b9a0-514a-4af7-a8f2-2f0458db6db8-kube-api-access-6zqxq\") pod \"octavia-persistence-db-create-j8crf\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.359966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a524b9a0-514a-4af7-a8f2-2f0458db6db8-operator-scripts\") pod \"octavia-persistence-db-create-j8crf\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.462425 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqxq\" (UniqueName: \"kubernetes.io/projected/a524b9a0-514a-4af7-a8f2-2f0458db6db8-kube-api-access-6zqxq\") pod \"octavia-persistence-db-create-j8crf\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.462487 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a524b9a0-514a-4af7-a8f2-2f0458db6db8-operator-scripts\") pod \"octavia-persistence-db-create-j8crf\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " 
pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.463388 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a524b9a0-514a-4af7-a8f2-2f0458db6db8-operator-scripts\") pod \"octavia-persistence-db-create-j8crf\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.485091 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqxq\" (UniqueName: \"kubernetes.io/projected/a524b9a0-514a-4af7-a8f2-2f0458db6db8-kube-api-access-6zqxq\") pod \"octavia-persistence-db-create-j8crf\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.627474 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.860025 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-d93b-account-create-update-m89hf"] Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.862292 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.866005 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.870374 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx4t2\" (UniqueName: \"kubernetes.io/projected/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-kube-api-access-wx4t2\") pod \"octavia-d93b-account-create-update-m89hf\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.871072 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-operator-scripts\") pod \"octavia-d93b-account-create-update-m89hf\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.883278 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d93b-account-create-update-m89hf"] Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.974424 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx4t2\" (UniqueName: \"kubernetes.io/projected/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-kube-api-access-wx4t2\") pod \"octavia-d93b-account-create-update-m89hf\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 09:36:46.974765 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-operator-scripts\") pod \"octavia-d93b-account-create-update-m89hf\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:46 crc kubenswrapper[4982]: I0123 
09:36:46.975702 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-operator-scripts\") pod \"octavia-d93b-account-create-update-m89hf\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.011066 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx4t2\" (UniqueName: \"kubernetes.io/projected/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-kube-api-access-wx4t2\") pod \"octavia-d93b-account-create-update-m89hf\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:47 crc kubenswrapper[4982]: W0123 09:36:47.094354 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda524b9a0_514a_4af7_a8f2_2f0458db6db8.slice/crio-b4f7594c84f0567b16961979ba5fbe26b28382b0f225a35014a860584bf9e690 WatchSource:0}: Error finding container b4f7594c84f0567b16961979ba5fbe26b28382b0f225a35014a860584bf9e690: Status 404 returned error can't find the container with id b4f7594c84f0567b16961979ba5fbe26b28382b0f225a35014a860584bf9e690 Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.096666 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-j8crf"] Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.187779 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.461226 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-j8crf" event={"ID":"a524b9a0-514a-4af7-a8f2-2f0458db6db8","Type":"ContainerStarted","Data":"b84e9df49d998bde7cf54db0910bf03913d913e0dc954f505f10e9ed8d34663e"} Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.461534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-j8crf" event={"ID":"a524b9a0-514a-4af7-a8f2-2f0458db6db8","Type":"ContainerStarted","Data":"b4f7594c84f0567b16961979ba5fbe26b28382b0f225a35014a860584bf9e690"} Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.491063 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-persistence-db-create-j8crf" podStartSLOduration=1.491041348 podStartE2EDuration="1.491041348s" podCreationTimestamp="2026-01-23 09:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:36:47.476885467 +0000 UTC m=+6081.957248863" watchObservedRunningTime="2026-01-23 09:36:47.491041348 +0000 UTC m=+6081.971404744" Jan 23 09:36:47 crc kubenswrapper[4982]: I0123 09:36:47.663724 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-d93b-account-create-update-m89hf"] Jan 23 09:36:47 crc kubenswrapper[4982]: W0123 09:36:47.688534 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bbe8afc_68f3_44eb_bee2_f4b5783d32e0.slice/crio-d4d2115286e583816e44bdcf03ea1d5a2c5ebc6cf29035165911888394c583cb WatchSource:0}: Error finding container d4d2115286e583816e44bdcf03ea1d5a2c5ebc6cf29035165911888394c583cb: Status 404 returned error can't find the container 
with id d4d2115286e583816e44bdcf03ea1d5a2c5ebc6cf29035165911888394c583cb Jan 23 09:36:48 crc kubenswrapper[4982]: I0123 09:36:48.472243 4982 generic.go:334] "Generic (PLEG): container finished" podID="a524b9a0-514a-4af7-a8f2-2f0458db6db8" containerID="b84e9df49d998bde7cf54db0910bf03913d913e0dc954f505f10e9ed8d34663e" exitCode=0 Jan 23 09:36:48 crc kubenswrapper[4982]: I0123 09:36:48.472380 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-j8crf" event={"ID":"a524b9a0-514a-4af7-a8f2-2f0458db6db8","Type":"ContainerDied","Data":"b84e9df49d998bde7cf54db0910bf03913d913e0dc954f505f10e9ed8d34663e"} Jan 23 09:36:48 crc kubenswrapper[4982]: I0123 09:36:48.475047 4982 generic.go:334] "Generic (PLEG): container finished" podID="0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" containerID="ae9f70ab14474e90d829beb8b6437d0ca92d7399d2a12a31d3a334f8572fe7e0" exitCode=0 Jan 23 09:36:48 crc kubenswrapper[4982]: I0123 09:36:48.475104 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d93b-account-create-update-m89hf" event={"ID":"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0","Type":"ContainerDied","Data":"ae9f70ab14474e90d829beb8b6437d0ca92d7399d2a12a31d3a334f8572fe7e0"} Jan 23 09:36:48 crc kubenswrapper[4982]: I0123 09:36:48.475158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d93b-account-create-update-m89hf" event={"ID":"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0","Type":"ContainerStarted","Data":"d4d2115286e583816e44bdcf03ea1d5a2c5ebc6cf29035165911888394c583cb"} Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.036104 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.044428 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.155714 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zqxq\" (UniqueName: \"kubernetes.io/projected/a524b9a0-514a-4af7-a8f2-2f0458db6db8-kube-api-access-6zqxq\") pod \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.156020 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx4t2\" (UniqueName: \"kubernetes.io/projected/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-kube-api-access-wx4t2\") pod \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.156078 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a524b9a0-514a-4af7-a8f2-2f0458db6db8-operator-scripts\") pod \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\" (UID: \"a524b9a0-514a-4af7-a8f2-2f0458db6db8\") " Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.156141 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-operator-scripts\") pod \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\" (UID: \"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0\") " Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.156674 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a524b9a0-514a-4af7-a8f2-2f0458db6db8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a524b9a0-514a-4af7-a8f2-2f0458db6db8" (UID: "a524b9a0-514a-4af7-a8f2-2f0458db6db8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.158245 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" (UID: "0bbe8afc-68f3-44eb-bee2-f4b5783d32e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.162863 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a524b9a0-514a-4af7-a8f2-2f0458db6db8-kube-api-access-6zqxq" (OuterVolumeSpecName: "kube-api-access-6zqxq") pod "a524b9a0-514a-4af7-a8f2-2f0458db6db8" (UID: "a524b9a0-514a-4af7-a8f2-2f0458db6db8"). InnerVolumeSpecName "kube-api-access-6zqxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.164126 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-kube-api-access-wx4t2" (OuterVolumeSpecName: "kube-api-access-wx4t2") pod "0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" (UID: "0bbe8afc-68f3-44eb-bee2-f4b5783d32e0"). InnerVolumeSpecName "kube-api-access-wx4t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.258890 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx4t2\" (UniqueName: \"kubernetes.io/projected/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-kube-api-access-wx4t2\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.259177 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a524b9a0-514a-4af7-a8f2-2f0458db6db8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.259186 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.259197 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zqxq\" (UniqueName: \"kubernetes.io/projected/a524b9a0-514a-4af7-a8f2-2f0458db6db8-kube-api-access-6zqxq\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.495361 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-j8crf" event={"ID":"a524b9a0-514a-4af7-a8f2-2f0458db6db8","Type":"ContainerDied","Data":"b4f7594c84f0567b16961979ba5fbe26b28382b0f225a35014a860584bf9e690"} Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.495401 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f7594c84f0567b16961979ba5fbe26b28382b0f225a35014a860584bf9e690" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.495459 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-j8crf" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.498932 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-d93b-account-create-update-m89hf" event={"ID":"0bbe8afc-68f3-44eb-bee2-f4b5783d32e0","Type":"ContainerDied","Data":"d4d2115286e583816e44bdcf03ea1d5a2c5ebc6cf29035165911888394c583cb"} Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.498979 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d2115286e583816e44bdcf03ea1d5a2c5ebc6cf29035165911888394c583cb" Jan 23 09:36:50 crc kubenswrapper[4982]: I0123 09:36:50.498959 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-d93b-account-create-update-m89hf" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.352807 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6c84888f8c-rf6pd"] Jan 23 09:36:52 crc kubenswrapper[4982]: E0123 09:36:52.354140 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" containerName="mariadb-account-create-update" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.354161 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" containerName="mariadb-account-create-update" Jan 23 09:36:52 crc kubenswrapper[4982]: E0123 09:36:52.354223 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a524b9a0-514a-4af7-a8f2-2f0458db6db8" containerName="mariadb-database-create" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.354230 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a524b9a0-514a-4af7-a8f2-2f0458db6db8" containerName="mariadb-database-create" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.354440 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a524b9a0-514a-4af7-a8f2-2f0458db6db8" containerName="mariadb-database-create" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.354450 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" containerName="mariadb-account-create-update" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.356018 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.359325 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.359417 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-f7wzg" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.359465 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.359550 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.384009 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6c84888f8c-rf6pd"] Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.406008 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-ovndb-tls-certs\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.406110 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-scripts\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.406252 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-config-data-merged\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.406319 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-combined-ca-bundle\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.406434 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-octavia-run\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.406495 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-config-data\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.512708 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-ovndb-tls-certs\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.512821 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-scripts\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.512936 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-config-data-merged\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.513014 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-combined-ca-bundle\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.513059 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-octavia-run\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.513093 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-config-data\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.514485 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-config-data-merged\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.514518 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-octavia-run\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.520686 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-scripts\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.521164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-ovndb-tls-certs\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.522493 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-config-data\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.526738 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-combined-ca-bundle\") pod \"octavia-api-6c84888f8c-rf6pd\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:52 crc kubenswrapper[4982]: I0123 09:36:52.685562 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:36:53 crc kubenswrapper[4982]: W0123 09:36:53.218954 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5808496f_9e30_472e_85a0_edd30e941942.slice/crio-bf099678755f7ad57abc5fbc3df4a103f377d98180cf11c05c36463be45fa249 WatchSource:0}: Error finding container bf099678755f7ad57abc5fbc3df4a103f377d98180cf11c05c36463be45fa249: Status 404 returned error can't find the container with id bf099678755f7ad57abc5fbc3df4a103f377d98180cf11c05c36463be45fa249 Jan 23 09:36:53 crc kubenswrapper[4982]: I0123 09:36:53.229751 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6c84888f8c-rf6pd"] Jan 23 09:36:53 crc kubenswrapper[4982]: I0123 09:36:53.546268 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerStarted","Data":"bf099678755f7ad57abc5fbc3df4a103f377d98180cf11c05c36463be45fa249"} Jan 23 09:36:56 crc kubenswrapper[4982]: I0123 09:36:56.037060 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-62b62"] Jan 23 09:36:56 crc kubenswrapper[4982]: I0123 09:36:56.047847 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-62b62"] Jan 23 09:36:57 crc kubenswrapper[4982]: I0123 09:36:57.839872 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d651a9-601e-4859-abd7-03ba425d148e" path="/var/lib/kubelet/pods/e4d651a9-601e-4859-abd7-03ba425d148e/volumes" Jan 23 09:37:02 crc kubenswrapper[4982]: I0123 09:37:02.659699 4982 generic.go:334] "Generic (PLEG): container finished" podID="5808496f-9e30-472e-85a0-edd30e941942" containerID="bac8b59dbbea746723c3f02aa1f06b301d4b21c1b1b8f2c229db32d478339acc" exitCode=0 Jan 23 09:37:02 crc kubenswrapper[4982]: I0123 09:37:02.659833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerDied","Data":"bac8b59dbbea746723c3f02aa1f06b301d4b21c1b1b8f2c229db32d478339acc"} Jan 23 09:37:03 crc kubenswrapper[4982]: I0123 09:37:03.677996 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerStarted","Data":"8135f62e3522ac8c609f4b299b47a5b0614b5de6703c04242121010146101b8d"} Jan 23 09:37:03 crc kubenswrapper[4982]: I0123 09:37:03.678535 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:37:03 crc kubenswrapper[4982]: I0123 09:37:03.678557 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:37:03 crc kubenswrapper[4982]: I0123 09:37:03.678569 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerStarted","Data":"6257371c3cf6cbe7e0e138c5eb843e130e1a3e4167df86b6b6b3e2a3628c327e"} Jan 23 09:37:03 crc kubenswrapper[4982]: I0123 09:37:03.701601 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6c84888f8c-rf6pd" podStartSLOduration=3.018091704 podStartE2EDuration="11.701570089s" podCreationTimestamp="2026-01-23 09:36:52 +0000 UTC" firstStartedPulling="2026-01-23 
09:36:53.220831162 +0000 UTC m=+6087.701194548" lastFinishedPulling="2026-01-23 09:37:01.904309537 +0000 UTC m=+6096.384672933" observedRunningTime="2026-01-23 09:37:03.70011763 +0000 UTC m=+6098.180481016" watchObservedRunningTime="2026-01-23 09:37:03.701570089 +0000 UTC m=+6098.181933475" Jan 23 09:37:11 crc kubenswrapper[4982]: I0123 09:37:11.780743 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:37:12 crc kubenswrapper[4982]: I0123 09:37:12.963162 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fpsz6" Jan 23 09:37:12 crc kubenswrapper[4982]: I0123 09:37:12.989567 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.019057 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kkldr" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.146180 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fpsz6-config-qffkf"] Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.147667 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.151046 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.162508 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fpsz6-config-qffkf"] Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.189160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msrbd\" (UniqueName: \"kubernetes.io/projected/61d36a4d-b732-4b75-9575-dfa9f94ce896-kube-api-access-msrbd\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.189290 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run-ovn\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.189345 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.189390 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-scripts\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.189431 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-log-ovn\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.189872 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-additional-scripts\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.292009 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run-ovn\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.292379 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run-ovn\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.292551 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.292648 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.292805 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-scripts\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.292930 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-log-ovn\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.293080 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-log-ovn\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.293401 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-additional-scripts\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.293557 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msrbd\" (UniqueName: \"kubernetes.io/projected/61d36a4d-b732-4b75-9575-dfa9f94ce896-kube-api-access-msrbd\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.294269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-additional-scripts\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.294970 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-scripts\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.312402 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msrbd\" (UniqueName: \"kubernetes.io/projected/61d36a4d-b732-4b75-9575-dfa9f94ce896-kube-api-access-msrbd\") pod \"ovn-controller-fpsz6-config-qffkf\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:13 crc kubenswrapper[4982]: I0123 09:37:13.481173 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:14 crc kubenswrapper[4982]: I0123 09:37:14.011929 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fpsz6-config-qffkf"] Jan 23 09:37:14 crc kubenswrapper[4982]: I0123 09:37:14.808487 4982 generic.go:334] "Generic (PLEG): container finished" podID="61d36a4d-b732-4b75-9575-dfa9f94ce896" containerID="081d7b45a828452d88231f20d4291093aba7ccc4bd971d2929240842528f7e11" exitCode=0 Jan 23 09:37:14 crc kubenswrapper[4982]: I0123 09:37:14.808589 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6-config-qffkf" event={"ID":"61d36a4d-b732-4b75-9575-dfa9f94ce896","Type":"ContainerDied","Data":"081d7b45a828452d88231f20d4291093aba7ccc4bd971d2929240842528f7e11"} Jan 23 09:37:14 crc kubenswrapper[4982]: I0123 09:37:14.809051 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6-config-qffkf" event={"ID":"61d36a4d-b732-4b75-9575-dfa9f94ce896","Type":"ContainerStarted","Data":"17e17481148e2bc44272d9d130b87b4f21aa5325486da91809aa4ae33028a8a5"} Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.201616 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.244219 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.272921 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run-ovn\") pod \"61d36a4d-b732-4b75-9575-dfa9f94ce896\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273034 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-log-ovn\") pod \"61d36a4d-b732-4b75-9575-dfa9f94ce896\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273109 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-scripts\") pod \"61d36a4d-b732-4b75-9575-dfa9f94ce896\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273214 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-additional-scripts\") pod \"61d36a4d-b732-4b75-9575-dfa9f94ce896\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273231 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "61d36a4d-b732-4b75-9575-dfa9f94ce896" (UID: "61d36a4d-b732-4b75-9575-dfa9f94ce896"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273305 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "61d36a4d-b732-4b75-9575-dfa9f94ce896" (UID: "61d36a4d-b732-4b75-9575-dfa9f94ce896"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273335 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run\") pod \"61d36a4d-b732-4b75-9575-dfa9f94ce896\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.273935 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run" (OuterVolumeSpecName: "var-run") pod "61d36a4d-b732-4b75-9575-dfa9f94ce896" (UID: "61d36a4d-b732-4b75-9575-dfa9f94ce896"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.274155 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msrbd\" (UniqueName: \"kubernetes.io/projected/61d36a4d-b732-4b75-9575-dfa9f94ce896-kube-api-access-msrbd\") pod \"61d36a4d-b732-4b75-9575-dfa9f94ce896\" (UID: \"61d36a4d-b732-4b75-9575-dfa9f94ce896\") " Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.275022 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-scripts" (OuterVolumeSpecName: "scripts") pod "61d36a4d-b732-4b75-9575-dfa9f94ce896" (UID: "61d36a4d-b732-4b75-9575-dfa9f94ce896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.276068 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.276095 4982 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.276105 4982 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/61d36a4d-b732-4b75-9575-dfa9f94ce896-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.276114 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.276303 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "61d36a4d-b732-4b75-9575-dfa9f94ce896" (UID: "61d36a4d-b732-4b75-9575-dfa9f94ce896"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.300520 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d36a4d-b732-4b75-9575-dfa9f94ce896-kube-api-access-msrbd" (OuterVolumeSpecName: "kube-api-access-msrbd") pod "61d36a4d-b732-4b75-9575-dfa9f94ce896" (UID: "61d36a4d-b732-4b75-9575-dfa9f94ce896"). InnerVolumeSpecName "kube-api-access-msrbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.378645 4982 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/61d36a4d-b732-4b75-9575-dfa9f94ce896-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.378705 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msrbd\" (UniqueName: \"kubernetes.io/projected/61d36a4d-b732-4b75-9575-dfa9f94ce896-kube-api-access-msrbd\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.831532 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6-config-qffkf" event={"ID":"61d36a4d-b732-4b75-9575-dfa9f94ce896","Type":"ContainerDied","Data":"17e17481148e2bc44272d9d130b87b4f21aa5325486da91809aa4ae33028a8a5"} Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.831575 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e17481148e2bc44272d9d130b87b4f21aa5325486da91809aa4ae33028a8a5" Jan 23 09:37:16 crc kubenswrapper[4982]: I0123 09:37:16.831578 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-qffkf" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.061757 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-drt6w"] Jan 23 09:37:17 crc kubenswrapper[4982]: E0123 09:37:17.062670 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d36a4d-b732-4b75-9575-dfa9f94ce896" containerName="ovn-config" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.062689 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d36a4d-b732-4b75-9575-dfa9f94ce896" containerName="ovn-config" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.062994 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d36a4d-b732-4b75-9575-dfa9f94ce896" containerName="ovn-config" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.064384 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.070295 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.070588 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.070772 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.088894 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-drt6w"] Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.099186 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc4b282-8200-4c45-8216-3be60dceea0e-config-data\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.099329 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc4b282-8200-4c45-8216-3be60dceea0e-scripts\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.099357 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/edc4b282-8200-4c45-8216-3be60dceea0e-hm-ports\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.099402 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/edc4b282-8200-4c45-8216-3be60dceea0e-config-data-merged\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.202288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc4b282-8200-4c45-8216-3be60dceea0e-config-data\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.202452 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc4b282-8200-4c45-8216-3be60dceea0e-scripts\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.202484 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/edc4b282-8200-4c45-8216-3be60dceea0e-hm-ports\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.202550 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/edc4b282-8200-4c45-8216-3be60dceea0e-config-data-merged\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.203067 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/edc4b282-8200-4c45-8216-3be60dceea0e-config-data-merged\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.203598 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/edc4b282-8200-4c45-8216-3be60dceea0e-hm-ports\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.207457 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc4b282-8200-4c45-8216-3be60dceea0e-config-data\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.207881 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc4b282-8200-4c45-8216-3be60dceea0e-scripts\") pod \"octavia-rsyslog-drt6w\" (UID: \"edc4b282-8200-4c45-8216-3be60dceea0e\") " pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.297937 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fpsz6-config-qffkf"] Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.317544 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fpsz6-config-qffkf"] Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.382775 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.438280 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fpsz6-config-mjjdc"] Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.439817 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.446969 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.473138 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fpsz6-config-mjjdc"] Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.515177 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run-ovn\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.515283 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-additional-scripts\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.515319 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-scripts\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.515382 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.515494 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-log-ovn\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.515514 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7rd\" (UniqueName: \"kubernetes.io/projected/18c181a4-0547-40d4-90ba-9a9bcf17fb86-kube-api-access-8s7rd\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.617809 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.618403 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-log-ovn\") pod 
\"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.618433 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7rd\" (UniqueName: \"kubernetes.io/projected/18c181a4-0547-40d4-90ba-9a9bcf17fb86-kube-api-access-8s7rd\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.618460 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run-ovn\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.618524 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-additional-scripts\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.618559 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-scripts\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.619185 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-log-ovn\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.619231 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run-ovn\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.619287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.619645 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-additional-scripts\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.624696 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-scripts\") pod 
\"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.647580 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7rd\" (UniqueName: \"kubernetes.io/projected/18c181a4-0547-40d4-90ba-9a9bcf17fb86-kube-api-access-8s7rd\") pod \"ovn-controller-fpsz6-config-mjjdc\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.837801 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d36a4d-b732-4b75-9575-dfa9f94ce896" path="/var/lib/kubelet/pods/61d36a4d-b732-4b75-9575-dfa9f94ce896/volumes" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.891344 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.956721 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-gwxdf"] Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.958616 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:17 crc kubenswrapper[4982]: I0123 09:37:17.965105 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.020827 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-gwxdf"] Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.039330 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2742744-be53-4ca6-98a9-3a9b541ba0b8-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-gwxdf\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.039383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2742744-be53-4ca6-98a9-3a9b541ba0b8-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-gwxdf\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.069658 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-drt6w"] Jan 23 09:37:18 crc kubenswrapper[4982]: W0123 09:37:18.118763 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedc4b282_8200_4c45_8216_3be60dceea0e.slice/crio-d4f17eeaff8b3f0d857a1177abd82bb4535b41ebfb8da3cd62203a64722ab433 WatchSource:0}: Error finding container d4f17eeaff8b3f0d857a1177abd82bb4535b41ebfb8da3cd62203a64722ab433: Status 404 returned error can't find the container with id d4f17eeaff8b3f0d857a1177abd82bb4535b41ebfb8da3cd62203a64722ab433 Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.142197 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2742744-be53-4ca6-98a9-3a9b541ba0b8-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-gwxdf\" (UID: 
\"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.142252 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2742744-be53-4ca6-98a9-3a9b541ba0b8-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-gwxdf\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.142913 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2742744-be53-4ca6-98a9-3a9b541ba0b8-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-gwxdf\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.154185 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2742744-be53-4ca6-98a9-3a9b541ba0b8-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-gwxdf\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.155541 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-drt6w"] Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.293785 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.532886 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fpsz6-config-mjjdc"] Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.828876 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-gwxdf"] Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.851827 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-drt6w" event={"ID":"edc4b282-8200-4c45-8216-3be60dceea0e","Type":"ContainerStarted","Data":"d4f17eeaff8b3f0d857a1177abd82bb4535b41ebfb8da3cd62203a64722ab433"} Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.852783 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6-config-mjjdc" event={"ID":"18c181a4-0547-40d4-90ba-9a9bcf17fb86","Type":"ContainerStarted","Data":"20bd86695cfd8e03327912a54090a1944f342330c9f912cf1fbde59e94605a50"} Jan 23 09:37:18 crc kubenswrapper[4982]: I0123 09:37:18.853708 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" event={"ID":"f2742744-be53-4ca6-98a9-3a9b541ba0b8","Type":"ContainerStarted","Data":"47417a377ee1739316e3c2879aef6da3500ff735033d9f608b733a949e36ea69"} Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.230311 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-pbztw"] Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.232505 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.237246 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.265033 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-pbztw"] Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.294686 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-scripts\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.294759 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data-merged\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.294899 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-combined-ca-bundle\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.295297 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.398066 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.398165 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-scripts\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.398223 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data-merged\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.398278 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-combined-ca-bundle\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.398903 4982 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data-merged\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.405086 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-combined-ca-bundle\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.405599 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.418127 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-scripts\") pod \"octavia-db-sync-pbztw\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.573324 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.877592 4982 generic.go:334] "Generic (PLEG): container finished" podID="18c181a4-0547-40d4-90ba-9a9bcf17fb86" containerID="5a719dda35d6b122533b733e91f9a2ee46fc9f44d8ee810ef7f86f4feeefc40d" exitCode=0 Jan 23 09:37:19 crc kubenswrapper[4982]: I0123 09:37:19.877802 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6-config-mjjdc" event={"ID":"18c181a4-0547-40d4-90ba-9a9bcf17fb86","Type":"ContainerDied","Data":"5a719dda35d6b122533b733e91f9a2ee46fc9f44d8ee810ef7f86f4feeefc40d"} Jan 23 09:37:20 crc kubenswrapper[4982]: I0123 09:37:20.519654 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-pbztw"] Jan 23 09:37:20 crc kubenswrapper[4982]: W0123 09:37:20.523185 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8a87ff_e284_4a46_9236_b0a0d530a474.slice/crio-464b2be0a6ad545d7c010725669a9e6957a0bad4725230d613b1017c9e8ab5e4 WatchSource:0}: Error finding container 464b2be0a6ad545d7c010725669a9e6957a0bad4725230d613b1017c9e8ab5e4: Status 404 returned error can't find the container with id 464b2be0a6ad545d7c010725669a9e6957a0bad4725230d613b1017c9e8ab5e4 Jan 23 09:37:20 crc kubenswrapper[4982]: I0123 09:37:20.894292 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-drt6w" event={"ID":"edc4b282-8200-4c45-8216-3be60dceea0e","Type":"ContainerStarted","Data":"3b29a7b1abb28a9dd5f6771228966e477d1ca7989c9a0c1ffb6e453bfd25f869"} Jan 23 09:37:20 crc kubenswrapper[4982]: I0123 09:37:20.897903 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pbztw" event={"ID":"7f8a87ff-e284-4a46-9236-b0a0d530a474","Type":"ContainerStarted","Data":"b1ecdafc6682595a2fdeb08cdf02965e107d4cd97035f2bf2478a4e9800bd027"} Jan 23 09:37:20 crc kubenswrapper[4982]: I0123 09:37:20.897951 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pbztw" 
event={"ID":"7f8a87ff-e284-4a46-9236-b0a0d530a474","Type":"ContainerStarted","Data":"464b2be0a6ad545d7c010725669a9e6957a0bad4725230d613b1017c9e8ab5e4"} Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.814993 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.921186 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fpsz6-config-mjjdc" event={"ID":"18c181a4-0547-40d4-90ba-9a9bcf17fb86","Type":"ContainerDied","Data":"20bd86695cfd8e03327912a54090a1944f342330c9f912cf1fbde59e94605a50"} Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.921228 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bd86695cfd8e03327912a54090a1944f342330c9f912cf1fbde59e94605a50" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.921246 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fpsz6-config-mjjdc" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.924578 4982 generic.go:334] "Generic (PLEG): container finished" podID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerID="b1ecdafc6682595a2fdeb08cdf02965e107d4cd97035f2bf2478a4e9800bd027" exitCode=0 Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.924792 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pbztw" event={"ID":"7f8a87ff-e284-4a46-9236-b0a0d530a474","Type":"ContainerDied","Data":"b1ecdafc6682595a2fdeb08cdf02965e107d4cd97035f2bf2478a4e9800bd027"} Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.967165 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-scripts\") pod \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.967250 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-additional-scripts\") pod \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.967330 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run\") pod \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.967408 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run-ovn\") pod \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.967499 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-log-ovn\") pod \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.967589 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s7rd\" 
(UniqueName: \"kubernetes.io/projected/18c181a4-0547-40d4-90ba-9a9bcf17fb86-kube-api-access-8s7rd\") pod \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\" (UID: \"18c181a4-0547-40d4-90ba-9a9bcf17fb86\") " Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.971885 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run" (OuterVolumeSpecName: "var-run") pod "18c181a4-0547-40d4-90ba-9a9bcf17fb86" (UID: "18c181a4-0547-40d4-90ba-9a9bcf17fb86"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.971999 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "18c181a4-0547-40d4-90ba-9a9bcf17fb86" (UID: "18c181a4-0547-40d4-90ba-9a9bcf17fb86"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.972047 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "18c181a4-0547-40d4-90ba-9a9bcf17fb86" (UID: "18c181a4-0547-40d4-90ba-9a9bcf17fb86"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.972162 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "18c181a4-0547-40d4-90ba-9a9bcf17fb86" (UID: "18c181a4-0547-40d4-90ba-9a9bcf17fb86"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:37:21 crc kubenswrapper[4982]: I0123 09:37:21.978550 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-scripts" (OuterVolumeSpecName: "scripts") pod "18c181a4-0547-40d4-90ba-9a9bcf17fb86" (UID: "18c181a4-0547-40d4-90ba-9a9bcf17fb86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.014444 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c181a4-0547-40d4-90ba-9a9bcf17fb86-kube-api-access-8s7rd" (OuterVolumeSpecName: "kube-api-access-8s7rd") pod "18c181a4-0547-40d4-90ba-9a9bcf17fb86" (UID: "18c181a4-0547-40d4-90ba-9a9bcf17fb86"). InnerVolumeSpecName "kube-api-access-8s7rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.072777 4982 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.072945 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s7rd\" (UniqueName: \"kubernetes.io/projected/18c181a4-0547-40d4-90ba-9a9bcf17fb86-kube-api-access-8s7rd\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.073614 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.073663 4982 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18c181a4-0547-40d4-90ba-9a9bcf17fb86-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.073676 4982 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.073685 4982 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c181a4-0547-40d4-90ba-9a9bcf17fb86-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.896099 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fpsz6-config-mjjdc"] Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.908041 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fpsz6-config-mjjdc"] Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.943188 4982 generic.go:334] "Generic (PLEG): container finished" podID="edc4b282-8200-4c45-8216-3be60dceea0e" containerID="3b29a7b1abb28a9dd5f6771228966e477d1ca7989c9a0c1ffb6e453bfd25f869" exitCode=0 Jan 23 09:37:22 crc kubenswrapper[4982]: I0123 09:37:22.943275 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-drt6w" event={"ID":"edc4b282-8200-4c45-8216-3be60dceea0e","Type":"ContainerDied","Data":"3b29a7b1abb28a9dd5f6771228966e477d1ca7989c9a0c1ffb6e453bfd25f869"} Jan 23 09:37:23 crc kubenswrapper[4982]: I0123 09:37:23.846169 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c181a4-0547-40d4-90ba-9a9bcf17fb86" path="/var/lib/kubelet/pods/18c181a4-0547-40d4-90ba-9a9bcf17fb86/volumes" Jan 23 09:37:23 crc kubenswrapper[4982]: I0123 09:37:23.960920 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pbztw" event={"ID":"7f8a87ff-e284-4a46-9236-b0a0d530a474","Type":"ContainerStarted","Data":"a6dcf5bdabf0bd21feaeaf885163339654890598b7aa0d97ba2ac94be3d34d7e"} Jan 23 09:37:23 crc kubenswrapper[4982]: I0123 09:37:23.989039 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-pbztw" podStartSLOduration=4.989018001 podStartE2EDuration="4.989018001s" podCreationTimestamp="2026-01-23 09:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:37:23.98375257 
+0000 UTC m=+6118.464115956" watchObservedRunningTime="2026-01-23 09:37:23.989018001 +0000 UTC m=+6118.469381387" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.169584 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-bjmrt"] Jan 23 09:37:28 crc kubenswrapper[4982]: E0123 09:37:28.171950 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c181a4-0547-40d4-90ba-9a9bcf17fb86" containerName="ovn-config" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.171967 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c181a4-0547-40d4-90ba-9a9bcf17fb86" containerName="ovn-config" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.172441 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c181a4-0547-40d4-90ba-9a9bcf17fb86" containerName="ovn-config" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.174019 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.177225 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.181994 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-bjmrt"] Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.183719 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.188858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.326919 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-scripts\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.326986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-config-data\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.327015 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/816268cf-a757-484e-83df-b8401a8e03b3-hm-ports\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.327031 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-amphora-certs\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.327357 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-combined-ca-bundle\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.327543 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/816268cf-a757-484e-83df-b8401a8e03b3-config-data-merged\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.430190 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-combined-ca-bundle\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.430276 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/816268cf-a757-484e-83df-b8401a8e03b3-config-data-merged\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.430442 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-scripts\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.430492 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-config-data\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.430525 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/816268cf-a757-484e-83df-b8401a8e03b3-hm-ports\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.430551 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-amphora-certs\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.431333 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/816268cf-a757-484e-83df-b8401a8e03b3-config-data-merged\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.431842 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/816268cf-a757-484e-83df-b8401a8e03b3-hm-ports\") pod 
\"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.432784 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.434427 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.434503 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.448499 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-combined-ca-bundle\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.454261 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-scripts\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.453258 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-amphora-certs\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.456664 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/816268cf-a757-484e-83df-b8401a8e03b3-config-data\") pod \"octavia-healthmanager-bjmrt\" (UID: \"816268cf-a757-484e-83df-b8401a8e03b3\") " pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:28 crc kubenswrapper[4982]: I0123 09:37:28.503678 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:29 crc kubenswrapper[4982]: I0123 09:37:29.217071 4982 generic.go:334] "Generic (PLEG): container finished" podID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerID="a6dcf5bdabf0bd21feaeaf885163339654890598b7aa0d97ba2ac94be3d34d7e" exitCode=0 Jan 23 09:37:29 crc kubenswrapper[4982]: I0123 09:37:29.217142 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pbztw" event={"ID":"7f8a87ff-e284-4a46-9236-b0a0d530a474","Type":"ContainerDied","Data":"a6dcf5bdabf0bd21feaeaf885163339654890598b7aa0d97ba2ac94be3d34d7e"} Jan 23 09:37:30 crc kubenswrapper[4982]: I0123 09:37:30.870989 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.003861 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data-merged\") pod \"7f8a87ff-e284-4a46-9236-b0a0d530a474\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.004352 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data\") pod \"7f8a87ff-e284-4a46-9236-b0a0d530a474\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.004469 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-combined-ca-bundle\") pod \"7f8a87ff-e284-4a46-9236-b0a0d530a474\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.004692 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-scripts\") pod \"7f8a87ff-e284-4a46-9236-b0a0d530a474\" (UID: \"7f8a87ff-e284-4a46-9236-b0a0d530a474\") " Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.010425 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data" (OuterVolumeSpecName: "config-data") pod "7f8a87ff-e284-4a46-9236-b0a0d530a474" (UID: "7f8a87ff-e284-4a46-9236-b0a0d530a474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.010553 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-scripts" (OuterVolumeSpecName: "scripts") pod "7f8a87ff-e284-4a46-9236-b0a0d530a474" (UID: "7f8a87ff-e284-4a46-9236-b0a0d530a474"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.046135 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f8a87ff-e284-4a46-9236-b0a0d530a474" (UID: "7f8a87ff-e284-4a46-9236-b0a0d530a474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.046670 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "7f8a87ff-e284-4a46-9236-b0a0d530a474" (UID: "7f8a87ff-e284-4a46-9236-b0a0d530a474"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.112733 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.112764 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.112774 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.112782 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8a87ff-e284-4a46-9236-b0a0d530a474-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.242271 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-drt6w" event={"ID":"edc4b282-8200-4c45-8216-3be60dceea0e","Type":"ContainerStarted","Data":"cbbbfebe23927d7446ee2a47ce1846d17807b77fa78265487faab988eef837df"} Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.242951 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.247119 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" event={"ID":"f2742744-be53-4ca6-98a9-3a9b541ba0b8","Type":"ContainerStarted","Data":"35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0"} Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.250102 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-pbztw" event={"ID":"7f8a87ff-e284-4a46-9236-b0a0d530a474","Type":"ContainerDied","Data":"464b2be0a6ad545d7c010725669a9e6957a0bad4725230d613b1017c9e8ab5e4"} Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.250140 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464b2be0a6ad545d7c010725669a9e6957a0bad4725230d613b1017c9e8ab5e4" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.250201 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-pbztw" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.267737 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-bjmrt"] Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.278062 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-drt6w" podStartSLOduration=1.739112176 podStartE2EDuration="14.278044785s" podCreationTimestamp="2026-01-23 09:37:17 +0000 UTC" firstStartedPulling="2026-01-23 09:37:18.146411368 +0000 UTC m=+6112.626774754" lastFinishedPulling="2026-01-23 09:37:30.685343977 +0000 UTC m=+6125.165707363" observedRunningTime="2026-01-23 09:37:31.263390291 +0000 UTC m=+6125.743753677" watchObservedRunningTime="2026-01-23 09:37:31.278044785 +0000 UTC m=+6125.758408171" Jan 23 09:37:31 crc kubenswrapper[4982]: W0123 09:37:31.278415 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod816268cf_a757_484e_83df_b8401a8e03b3.slice/crio-a6b1ceaf76cf13303c25e8285c13e6e6dd2af66abd3cb596f11dcf65eb87658a WatchSource:0}: Error finding container a6b1ceaf76cf13303c25e8285c13e6e6dd2af66abd3cb596f11dcf65eb87658a: Status 404 returned error can't find the container with id a6b1ceaf76cf13303c25e8285c13e6e6dd2af66abd3cb596f11dcf65eb87658a Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.896558 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-7dd5cd75f7-rrmc7"] Jan 23 09:37:31 crc kubenswrapper[4982]: E0123 09:37:31.897473 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerName="octavia-db-sync" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.897488 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerName="octavia-db-sync" Jan 23 09:37:31 crc kubenswrapper[4982]: E0123 09:37:31.897513 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerName="init" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.897519 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerName="init" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.897741 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8a87ff-e284-4a46-9236-b0a0d530a474" containerName="octavia-db-sync" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.899434 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.902733 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.903066 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Jan 23 09:37:31 crc kubenswrapper[4982]: I0123 09:37:31.930189 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7dd5cd75f7-rrmc7"] Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.031552 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c4cdb462-46e9-49e7-b539-992fcf1d6e67-config-data-merged\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.031868 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/c4cdb462-46e9-49e7-b539-992fcf1d6e67-octavia-run\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.032114 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-scripts\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.032189 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-public-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.032265 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-ovndb-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.032477 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-config-data\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.032519 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-internal-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.032560 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-combined-ca-bundle\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.134903 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/c4cdb462-46e9-49e7-b539-992fcf1d6e67-octavia-run\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135011 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-scripts\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135045 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-public-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135081 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-ovndb-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135163 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-config-data\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135186 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-internal-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135209 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-combined-ca-bundle\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135241 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c4cdb462-46e9-49e7-b539-992fcf1d6e67-config-data-merged\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135759 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c4cdb462-46e9-49e7-b539-992fcf1d6e67-octavia-run\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.135806 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/c4cdb462-46e9-49e7-b539-992fcf1d6e67-config-data-merged\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.141466 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-ovndb-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.143339 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-config-data\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.143495 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-combined-ca-bundle\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.151790 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-public-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.155903 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-internal-tls-certs\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.162335 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4cdb462-46e9-49e7-b539-992fcf1d6e67-scripts\") pod \"octavia-api-7dd5cd75f7-rrmc7\" (UID: \"c4cdb462-46e9-49e7-b539-992fcf1d6e67\") " pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.218872 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.263184 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bjmrt" event={"ID":"816268cf-a757-484e-83df-b8401a8e03b3","Type":"ContainerStarted","Data":"f6b6dc55041419760b42aeaa563eac1456434bed7bf4a19aee4b1c61a48fdf96"} Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.263387 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bjmrt" event={"ID":"816268cf-a757-484e-83df-b8401a8e03b3","Type":"ContainerStarted","Data":"a6b1ceaf76cf13303c25e8285c13e6e6dd2af66abd3cb596f11dcf65eb87658a"} Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.266903 4982 generic.go:334] "Generic (PLEG): container finished" podID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerID="35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0" exitCode=0 Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.267082 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" event={"ID":"f2742744-be53-4ca6-98a9-3a9b541ba0b8","Type":"ContainerDied","Data":"35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0"} Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.727368 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-7dd5cd75f7-rrmc7"] Jan 23 09:37:32 crc kubenswrapper[4982]: W0123 09:37:32.732736 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4cdb462_46e9_49e7_b539_992fcf1d6e67.slice/crio-1d537db772d5342a022f24f8481e97e0f48fc510f7144b67268bf8b984672f7c WatchSource:0}: Error finding container 1d537db772d5342a022f24f8481e97e0f48fc510f7144b67268bf8b984672f7c: Status 404 returned error can't find the container with id 1d537db772d5342a022f24f8481e97e0f48fc510f7144b67268bf8b984672f7c Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.864558 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-6q5jz"] Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.866615 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.871968 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.876122 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.882108 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-6q5jz"] Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.955483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-amphora-certs\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.955533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5340823e-9b64-4b44-83a6-f42c593d5bea-hm-ports\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.955586 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-combined-ca-bundle\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.955645 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-config-data\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.955670 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-scripts\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:32 crc kubenswrapper[4982]: I0123 09:37:32.955738 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5340823e-9b64-4b44-83a6-f42c593d5bea-config-data-merged\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.057800 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5340823e-9b64-4b44-83a6-f42c593d5bea-hm-ports\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.057876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-combined-ca-bundle\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.057914 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-config-data\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.057936 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-scripts\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.057991 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5340823e-9b64-4b44-83a6-f42c593d5bea-config-data-merged\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.058085 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-amphora-certs\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.062702 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/5340823e-9b64-4b44-83a6-f42c593d5bea-hm-ports\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.065132 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5340823e-9b64-4b44-83a6-f42c593d5bea-config-data-merged\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.078694 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-amphora-certs\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.078804 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-scripts\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.083578 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-config-data\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " 
pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.087681 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5340823e-9b64-4b44-83a6-f42c593d5bea-combined-ca-bundle\") pod \"octavia-housekeeping-6q5jz\" (UID: \"5340823e-9b64-4b44-83a6-f42c593d5bea\") " pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.195324 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.282695 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" event={"ID":"f2742744-be53-4ca6-98a9-3a9b541ba0b8","Type":"ContainerStarted","Data":"4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978"} Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.288692 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" event={"ID":"c4cdb462-46e9-49e7-b539-992fcf1d6e67","Type":"ContainerStarted","Data":"8cc7eb73d893e8e208c1e30abab58f2f4b8644beb604bdf5ce7855f9fe7500a9"} Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.288730 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" event={"ID":"c4cdb462-46e9-49e7-b539-992fcf1d6e67","Type":"ContainerStarted","Data":"1d537db772d5342a022f24f8481e97e0f48fc510f7144b67268bf8b984672f7c"} Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.311414 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" podStartSLOduration=4.287005883 podStartE2EDuration="16.311391079s" podCreationTimestamp="2026-01-23 09:37:17 +0000 UTC" firstStartedPulling="2026-01-23 09:37:18.832582695 +0000 UTC m=+6113.312946081" lastFinishedPulling="2026-01-23 09:37:30.856967891 +0000 UTC m=+6125.337331277" observedRunningTime="2026-01-23 09:37:33.298306367 +0000 UTC m=+6127.778669753" watchObservedRunningTime="2026-01-23 09:37:33.311391079 +0000 UTC m=+6127.791754485" Jan 23 09:37:33 crc kubenswrapper[4982]: I0123 09:37:33.769316 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-6q5jz"] Jan 23 09:37:33 crc kubenswrapper[4982]: W0123 09:37:33.791009 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5340823e_9b64_4b44_83a6_f42c593d5bea.slice/crio-2b3a0995a5f3bf1f5d49298688de503163073f3588d5456edf8f3d107f8908f7 WatchSource:0}: Error finding container 2b3a0995a5f3bf1f5d49298688de503163073f3588d5456edf8f3d107f8908f7: Status 404 returned error can't find the container with id 2b3a0995a5f3bf1f5d49298688de503163073f3588d5456edf8f3d107f8908f7 Jan 23 09:37:34 crc kubenswrapper[4982]: I0123 09:37:34.301161 4982 generic.go:334] "Generic (PLEG): container finished" podID="c4cdb462-46e9-49e7-b539-992fcf1d6e67" containerID="8cc7eb73d893e8e208c1e30abab58f2f4b8644beb604bdf5ce7855f9fe7500a9" exitCode=0 Jan 23 09:37:34 crc kubenswrapper[4982]: I0123 09:37:34.301235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" event={"ID":"c4cdb462-46e9-49e7-b539-992fcf1d6e67","Type":"ContainerDied","Data":"8cc7eb73d893e8e208c1e30abab58f2f4b8644beb604bdf5ce7855f9fe7500a9"} Jan 23 09:37:34 crc kubenswrapper[4982]: I0123 09:37:34.304101 4982 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6q5jz" event={"ID":"5340823e-9b64-4b44-83a6-f42c593d5bea","Type":"ContainerStarted","Data":"2b3a0995a5f3bf1f5d49298688de503163073f3588d5456edf8f3d107f8908f7"} Jan 23 09:37:34 crc kubenswrapper[4982]: I0123 09:37:34.306741 4982 generic.go:334] "Generic (PLEG): container finished" podID="816268cf-a757-484e-83df-b8401a8e03b3" containerID="f6b6dc55041419760b42aeaa563eac1456434bed7bf4a19aee4b1c61a48fdf96" exitCode=0 Jan 23 09:37:34 crc kubenswrapper[4982]: I0123 09:37:34.307661 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bjmrt" event={"ID":"816268cf-a757-484e-83df-b8401a8e03b3","Type":"ContainerDied","Data":"f6b6dc55041419760b42aeaa563eac1456434bed7bf4a19aee4b1c61a48fdf96"} Jan 23 09:37:35 crc kubenswrapper[4982]: I0123 09:37:35.329446 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-bjmrt" event={"ID":"816268cf-a757-484e-83df-b8401a8e03b3","Type":"ContainerStarted","Data":"193b147e6e0eff1274eb08a234db1710a22e62449ecb8a51e43711ca9e3a2c67"} Jan 23 09:37:35 crc kubenswrapper[4982]: I0123 09:37:35.329923 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:35 crc kubenswrapper[4982]: I0123 09:37:35.334653 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" event={"ID":"c4cdb462-46e9-49e7-b539-992fcf1d6e67","Type":"ContainerStarted","Data":"f69a5ee71651c002424074a81691a61f829e3a3a5897dc364138654ae2736086"} Jan 23 09:37:35 crc kubenswrapper[4982]: I0123 09:37:35.360311 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-bjmrt" podStartSLOduration=7.360291041 podStartE2EDuration="7.360291041s" podCreationTimestamp="2026-01-23 09:37:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:37:35.351238437 +0000 UTC m=+6129.831601823" watchObservedRunningTime="2026-01-23 09:37:35.360291041 +0000 UTC m=+6129.840654427" Jan 23 09:37:36 crc kubenswrapper[4982]: I0123 09:37:36.362943 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" event={"ID":"c4cdb462-46e9-49e7-b539-992fcf1d6e67","Type":"ContainerStarted","Data":"e0ac1c27668e972e3cb62c4a60e80b56fad0b635ed187ad16a60b2288618d7fc"} Jan 23 09:37:36 crc kubenswrapper[4982]: I0123 09:37:36.363313 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:36 crc kubenswrapper[4982]: I0123 09:37:36.363362 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:36 crc kubenswrapper[4982]: I0123 09:37:36.391778 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" podStartSLOduration=5.391757532 podStartE2EDuration="5.391757532s" podCreationTimestamp="2026-01-23 09:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:37:36.390607841 +0000 UTC m=+6130.870971217" watchObservedRunningTime="2026-01-23 09:37:36.391757532 +0000 UTC m=+6130.872120918" Jan 23 09:37:38 crc kubenswrapper[4982]: I0123 09:37:38.385779 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-housekeeping-6q5jz" event={"ID":"5340823e-9b64-4b44-83a6-f42c593d5bea","Type":"ContainerStarted","Data":"42d64172e308c02244e6cdc196df1df815a601a5633e61f128691445b7473de9"} Jan 23 09:37:40 crc kubenswrapper[4982]: I0123 09:37:40.428269 4982 generic.go:334] "Generic (PLEG): container finished" podID="5340823e-9b64-4b44-83a6-f42c593d5bea" containerID="42d64172e308c02244e6cdc196df1df815a601a5633e61f128691445b7473de9" exitCode=0 Jan 23 09:37:40 crc kubenswrapper[4982]: I0123 09:37:40.428370 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6q5jz" event={"ID":"5340823e-9b64-4b44-83a6-f42c593d5bea","Type":"ContainerDied","Data":"42d64172e308c02244e6cdc196df1df815a601a5633e61f128691445b7473de9"} Jan 23 09:37:41 crc kubenswrapper[4982]: I0123 09:37:41.439355 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-6q5jz" event={"ID":"5340823e-9b64-4b44-83a6-f42c593d5bea","Type":"ContainerStarted","Data":"7924a0b32f10c1fc9bd5ffa9fc11e4d020bdbea0f487ab6c0f4213c0dd71ff7a"} Jan 23 09:37:41 crc kubenswrapper[4982]: I0123 09:37:41.439926 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:41 crc kubenswrapper[4982]: I0123 09:37:41.467344 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-6q5jz" podStartSLOduration=7.164359172 podStartE2EDuration="9.467322569s" podCreationTimestamp="2026-01-23 09:37:32 +0000 UTC" firstStartedPulling="2026-01-23 09:37:33.793095518 +0000 UTC m=+6128.273458904" lastFinishedPulling="2026-01-23 09:37:36.096058895 +0000 UTC m=+6130.576422301" observedRunningTime="2026-01-23 09:37:41.458236854 +0000 UTC m=+6135.938600250" watchObservedRunningTime="2026-01-23 09:37:41.467322569 +0000 UTC m=+6135.947685955" Jan 23 09:37:43 crc kubenswrapper[4982]: I0123 09:37:43.544245 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-bjmrt" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.457419 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tg2hg"] Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.460282 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.473834 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tg2hg"] Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.517041 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcvq4\" (UniqueName: \"kubernetes.io/projected/e8aa702b-3863-4551-977a-ce81725132cb-kube-api-access-gcvq4\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.517269 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-catalog-content\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.517372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-utilities\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.532590 4982 scope.go:117] "RemoveContainer" containerID="2756e5e50b110a99642009925c601d7400ae09ca34eab3625b144d8f7e5627ff" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.620407 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-catalog-content\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.620817 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-catalog-content\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.620886 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-utilities\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.620987 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcvq4\" (UniqueName: \"kubernetes.io/projected/e8aa702b-3863-4551-977a-ce81725132cb-kube-api-access-gcvq4\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.621122 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-utilities\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " 
pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.646591 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcvq4\" (UniqueName: \"kubernetes.io/projected/e8aa702b-3863-4551-977a-ce81725132cb-kube-api-access-gcvq4\") pod \"redhat-operators-tg2hg\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:44 crc kubenswrapper[4982]: I0123 09:37:44.778168 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:45 crc kubenswrapper[4982]: I0123 09:37:45.290203 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tg2hg"] Jan 23 09:37:45 crc kubenswrapper[4982]: W0123 09:37:45.295103 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8aa702b_3863_4551_977a_ce81725132cb.slice/crio-67291185d13122825853536bb1130c9b338a4632f9330d050627197020161989 WatchSource:0}: Error finding container 67291185d13122825853536bb1130c9b338a4632f9330d050627197020161989: Status 404 returned error can't find the container with id 67291185d13122825853536bb1130c9b338a4632f9330d050627197020161989 Jan 23 09:37:45 crc kubenswrapper[4982]: I0123 09:37:45.486540 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerStarted","Data":"67291185d13122825853536bb1130c9b338a4632f9330d050627197020161989"} Jan 23 09:37:46 crc kubenswrapper[4982]: I0123 09:37:46.504940 4982 generic.go:334] "Generic (PLEG): container finished" podID="e8aa702b-3863-4551-977a-ce81725132cb" containerID="a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288" exitCode=0 Jan 23 09:37:46 crc kubenswrapper[4982]: I0123 09:37:46.505003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerDied","Data":"a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288"} Jan 23 09:37:47 crc kubenswrapper[4982]: I0123 09:37:47.437604 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-drt6w" Jan 23 09:37:47 crc kubenswrapper[4982]: I0123 09:37:47.520800 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerStarted","Data":"ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11"} Jan 23 09:37:48 crc kubenswrapper[4982]: I0123 09:37:48.269372 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-6q5jz" Jan 23 09:37:51 crc kubenswrapper[4982]: I0123 09:37:51.582780 4982 generic.go:334] "Generic (PLEG): container finished" podID="e8aa702b-3863-4551-977a-ce81725132cb" containerID="ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11" exitCode=0 Jan 23 09:37:51 crc kubenswrapper[4982]: I0123 09:37:51.582927 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerDied","Data":"ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11"} Jan 23 09:37:51 crc kubenswrapper[4982]: I0123 09:37:51.952248 
4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:51 crc kubenswrapper[4982]: I0123 09:37:51.957762 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-7dd5cd75f7-rrmc7" Jan 23 09:37:52 crc kubenswrapper[4982]: I0123 09:37:52.069409 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6c84888f8c-rf6pd"] Jan 23 09:37:52 crc kubenswrapper[4982]: I0123 09:37:52.070362 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6c84888f8c-rf6pd" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api" containerID="cri-o://6257371c3cf6cbe7e0e138c5eb843e130e1a3e4167df86b6b6b3e2a3628c327e" gracePeriod=30 Jan 23 09:37:52 crc kubenswrapper[4982]: I0123 09:37:52.071053 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6c84888f8c-rf6pd" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api-provider-agent" containerID="cri-o://8135f62e3522ac8c609f4b299b47a5b0614b5de6703c04242121010146101b8d" gracePeriod=30 Jan 23 09:37:52 crc kubenswrapper[4982]: I0123 09:37:52.595170 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerStarted","Data":"3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53"} Jan 23 09:37:52 crc kubenswrapper[4982]: I0123 09:37:52.622819 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tg2hg" podStartSLOduration=3.07559963 podStartE2EDuration="8.622802875s" podCreationTimestamp="2026-01-23 09:37:44 +0000 UTC" firstStartedPulling="2026-01-23 09:37:46.5073692 +0000 UTC m=+6140.987732586" lastFinishedPulling="2026-01-23 09:37:52.054572445 +0000 UTC m=+6146.534935831" observedRunningTime="2026-01-23 09:37:52.616769012 +0000 UTC m=+6147.097132398" watchObservedRunningTime="2026-01-23 09:37:52.622802875 +0000 UTC m=+6147.103166261" Jan 23 09:37:53 crc kubenswrapper[4982]: I0123 09:37:53.608000 4982 generic.go:334] "Generic (PLEG): container finished" podID="5808496f-9e30-472e-85a0-edd30e941942" containerID="8135f62e3522ac8c609f4b299b47a5b0614b5de6703c04242121010146101b8d" exitCode=0 Jan 23 09:37:53 crc kubenswrapper[4982]: I0123 09:37:53.608074 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerDied","Data":"8135f62e3522ac8c609f4b299b47a5b0614b5de6703c04242121010146101b8d"} Jan 23 09:37:54 crc kubenswrapper[4982]: I0123 09:37:54.779017 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:54 crc kubenswrapper[4982]: I0123 09:37:54.780184 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.259505 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/octavia-api-6c84888f8c-rf6pd" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api-provider-agent" probeResult="failure" output="Get \"http://10.217.1.124:9876/healthcheck\": read tcp 10.217.0.2:54440->10.217.1.124:9876: read: connection reset by peer" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 
09:37:55.259516 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/octavia-api-6c84888f8c-rf6pd" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api" probeResult="failure" output="Get \"http://10.217.1.124:9876/healthcheck\": read tcp 10.217.0.2:54446->10.217.1.124:9876: read: connection reset by peer" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.631117 4982 generic.go:334] "Generic (PLEG): container finished" podID="5808496f-9e30-472e-85a0-edd30e941942" containerID="6257371c3cf6cbe7e0e138c5eb843e130e1a3e4167df86b6b6b3e2a3628c327e" exitCode=0 Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.631919 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerDied","Data":"6257371c3cf6cbe7e0e138c5eb843e130e1a3e4167df86b6b6b3e2a3628c327e"} Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.632003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6c84888f8c-rf6pd" event={"ID":"5808496f-9e30-472e-85a0-edd30e941942","Type":"ContainerDied","Data":"bf099678755f7ad57abc5fbc3df4a103f377d98180cf11c05c36463be45fa249"} Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.632020 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf099678755f7ad57abc5fbc3df4a103f377d98180cf11c05c36463be45fa249" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.691552 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.825824 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tg2hg" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="registry-server" probeResult="failure" output=< Jan 23 09:37:55 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 09:37:55 crc kubenswrapper[4982]: > Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.836345 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-ovndb-tls-certs\") pod \"5808496f-9e30-472e-85a0-edd30e941942\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.836457 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-combined-ca-bundle\") pod \"5808496f-9e30-472e-85a0-edd30e941942\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.836515 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-config-data-merged\") pod \"5808496f-9e30-472e-85a0-edd30e941942\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.836584 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-scripts\") pod \"5808496f-9e30-472e-85a0-edd30e941942\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.836654 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-octavia-run\") pod \"5808496f-9e30-472e-85a0-edd30e941942\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.836745 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-config-data\") pod \"5808496f-9e30-472e-85a0-edd30e941942\" (UID: \"5808496f-9e30-472e-85a0-edd30e941942\") " Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.838551 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-octavia-run" (OuterVolumeSpecName: "octavia-run") pod "5808496f-9e30-472e-85a0-edd30e941942" (UID: "5808496f-9e30-472e-85a0-edd30e941942"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.843923 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-config-data" (OuterVolumeSpecName: "config-data") pod "5808496f-9e30-472e-85a0-edd30e941942" (UID: "5808496f-9e30-472e-85a0-edd30e941942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.846191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-scripts" (OuterVolumeSpecName: "scripts") pod "5808496f-9e30-472e-85a0-edd30e941942" (UID: "5808496f-9e30-472e-85a0-edd30e941942"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.907597 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "5808496f-9e30-472e-85a0-edd30e941942" (UID: "5808496f-9e30-472e-85a0-edd30e941942"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.913085 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5808496f-9e30-472e-85a0-edd30e941942" (UID: "5808496f-9e30-472e-85a0-edd30e941942"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.939496 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.939537 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.939552 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.939567 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:55 crc kubenswrapper[4982]: I0123 09:37:55.939579 4982 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/5808496f-9e30-472e-85a0-edd30e941942-octavia-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:56 crc kubenswrapper[4982]: I0123 09:37:56.034583 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5808496f-9e30-472e-85a0-edd30e941942" (UID: "5808496f-9e30-472e-85a0-edd30e941942"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:37:56 crc kubenswrapper[4982]: I0123 09:37:56.041513 4982 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5808496f-9e30-472e-85a0-edd30e941942-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:37:56 crc kubenswrapper[4982]: I0123 09:37:56.640797 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6c84888f8c-rf6pd" Jan 23 09:37:56 crc kubenswrapper[4982]: I0123 09:37:56.673505 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6c84888f8c-rf6pd"] Jan 23 09:37:56 crc kubenswrapper[4982]: I0123 09:37:56.684541 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-6c84888f8c-rf6pd"] Jan 23 09:37:57 crc kubenswrapper[4982]: I0123 09:37:57.838936 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5808496f-9e30-472e-85a0-edd30e941942" path="/var/lib/kubelet/pods/5808496f-9e30-472e-85a0-edd30e941942/volumes" Jan 23 09:38:04 crc kubenswrapper[4982]: I0123 09:38:04.827241 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:38:04 crc kubenswrapper[4982]: I0123 09:38:04.881228 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:38:05 crc kubenswrapper[4982]: I0123 09:38:05.067415 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tg2hg"] Jan 23 09:38:06 crc kubenswrapper[4982]: I0123 09:38:06.750526 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tg2hg" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="registry-server" containerID="cri-o://3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53" gracePeriod=2 Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.288123 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.378405 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-utilities\") pod \"e8aa702b-3863-4551-977a-ce81725132cb\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.378587 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-catalog-content\") pod \"e8aa702b-3863-4551-977a-ce81725132cb\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.378771 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcvq4\" (UniqueName: \"kubernetes.io/projected/e8aa702b-3863-4551-977a-ce81725132cb-kube-api-access-gcvq4\") pod \"e8aa702b-3863-4551-977a-ce81725132cb\" (UID: \"e8aa702b-3863-4551-977a-ce81725132cb\") " Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.379552 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-utilities" (OuterVolumeSpecName: "utilities") pod "e8aa702b-3863-4551-977a-ce81725132cb" (UID: "e8aa702b-3863-4551-977a-ce81725132cb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.400223 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8aa702b-3863-4551-977a-ce81725132cb-kube-api-access-gcvq4" (OuterVolumeSpecName: "kube-api-access-gcvq4") pod "e8aa702b-3863-4551-977a-ce81725132cb" (UID: "e8aa702b-3863-4551-977a-ce81725132cb"). InnerVolumeSpecName "kube-api-access-gcvq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.495504 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.495588 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcvq4\" (UniqueName: \"kubernetes.io/projected/e8aa702b-3863-4551-977a-ce81725132cb-kube-api-access-gcvq4\") on node \"crc\" DevicePath \"\"" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.521296 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8aa702b-3863-4551-977a-ce81725132cb" (UID: "e8aa702b-3863-4551-977a-ce81725132cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.597917 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8aa702b-3863-4551-977a-ce81725132cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.767314 4982 generic.go:334] "Generic (PLEG): container finished" podID="e8aa702b-3863-4551-977a-ce81725132cb" containerID="3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53" exitCode=0 Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.767388 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerDied","Data":"3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53"} Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.767429 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tg2hg" event={"ID":"e8aa702b-3863-4551-977a-ce81725132cb","Type":"ContainerDied","Data":"67291185d13122825853536bb1130c9b338a4632f9330d050627197020161989"} Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.767456 4982 scope.go:117] "RemoveContainer" containerID="3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.767757 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tg2hg" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.799461 4982 scope.go:117] "RemoveContainer" containerID="ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.823210 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tg2hg"] Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.833682 4982 scope.go:117] "RemoveContainer" containerID="a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.842438 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tg2hg"] Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.877248 4982 scope.go:117] "RemoveContainer" containerID="3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53" Jan 23 09:38:07 crc kubenswrapper[4982]: E0123 09:38:07.877939 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53\": container with ID starting with 3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53 not found: ID does not exist" containerID="3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.877989 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53"} err="failed to get container status \"3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53\": rpc error: code = NotFound desc = could not find container \"3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53\": container with ID starting with 3a1f4912ff51aa9d9309ded82db1211796d1a2d4c2a19119e0437c8ea1b9fa53 not found: ID does not exist" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.878025 4982 scope.go:117] "RemoveContainer" containerID="ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11" Jan 23 09:38:07 crc kubenswrapper[4982]: E0123 09:38:07.878347 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11\": container with ID starting with ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11 not found: ID does not exist" containerID="ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.878467 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11"} err="failed to get container status \"ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11\": rpc error: code = NotFound desc = could not find container \"ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11\": container with ID starting with ba2c0bc3509a70160c476722cc79e6cf36da1abc15ef354201ca1c69e365ad11 not found: ID does not exist" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.878569 4982 scope.go:117] "RemoveContainer" containerID="a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288" Jan 23 09:38:07 crc kubenswrapper[4982]: E0123 09:38:07.878976 4982 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288\": container with ID starting with a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288 not found: ID does not exist" containerID="a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288" Jan 23 09:38:07 crc kubenswrapper[4982]: I0123 09:38:07.879013 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288"} err="failed to get container status \"a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288\": rpc error: code = NotFound desc = could not find container \"a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288\": container with ID starting with a0d2e2aaccfe8e7e173ff9fd58de91f1ad490c584d41d79aeb8bf808e9d94288 not found: ID does not exist" Jan 23 09:38:09 crc kubenswrapper[4982]: I0123 09:38:09.841695 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8aa702b-3863-4551-977a-ce81725132cb" path="/var/lib/kubelet/pods/e8aa702b-3863-4551-977a-ce81725132cb/volumes" Jan 23 09:38:35 crc kubenswrapper[4982]: I0123 09:38:35.929572 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-bjmrt"] Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.840235 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-lrbkl"] Jan 23 09:38:37 crc kubenswrapper[4982]: E0123 09:38:37.841835 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="init" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.841862 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="init" Jan 23 09:38:37 crc kubenswrapper[4982]: E0123 09:38:37.841893 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="registry-server" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.841902 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="registry-server" Jan 23 09:38:37 crc kubenswrapper[4982]: E0123 09:38:37.841924 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.841933 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api" Jan 23 09:38:37 crc kubenswrapper[4982]: E0123 09:38:37.841946 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api-provider-agent" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.841954 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api-provider-agent" Jan 23 09:38:37 crc kubenswrapper[4982]: E0123 09:38:37.841985 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="extract-content" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.841993 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="extract-content" Jan 23 09:38:37 crc kubenswrapper[4982]: E0123 09:38:37.842004 4982 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="extract-utilities" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.842011 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="extract-utilities" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.842308 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.842325 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5808496f-9e30-472e-85a0-edd30e941942" containerName="octavia-api-provider-agent" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.842342 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8aa702b-3863-4551-977a-ce81725132cb" containerName="registry-server" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.843747 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.847920 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.848652 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.858794 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-lrbkl"] Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.978564 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/df37afea-7f1b-411e-ad95-cac7051ac0c5-config-data-merged\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.979014 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-combined-ca-bundle\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.979296 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-scripts\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.979493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/df37afea-7f1b-411e-ad95-cac7051ac0c5-hm-ports\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.979529 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-amphora-certs\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " 
pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:37 crc kubenswrapper[4982]: I0123 09:38:37.979714 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-config-data\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.081959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-config-data\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.082084 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/df37afea-7f1b-411e-ad95-cac7051ac0c5-config-data-merged\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.082131 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-combined-ca-bundle\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.082210 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-scripts\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.082283 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/df37afea-7f1b-411e-ad95-cac7051ac0c5-hm-ports\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.082301 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-amphora-certs\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.082768 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/df37afea-7f1b-411e-ad95-cac7051ac0c5-config-data-merged\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.083356 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/df37afea-7f1b-411e-ad95-cac7051ac0c5-hm-ports\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.087766 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-config-data\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.087904 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-amphora-certs\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.088232 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-scripts\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.089585 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df37afea-7f1b-411e-ad95-cac7051ac0c5-combined-ca-bundle\") pod \"octavia-worker-lrbkl\" (UID: \"df37afea-7f1b-411e-ad95-cac7051ac0c5\") " pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.163093 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:38 crc kubenswrapper[4982]: I0123 09:38:38.744821 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-lrbkl"] Jan 23 09:38:39 crc kubenswrapper[4982]: I0123 09:38:39.259778 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-lrbkl" event={"ID":"df37afea-7f1b-411e-ad95-cac7051ac0c5","Type":"ContainerStarted","Data":"c3ff71ecd9458f4cbd63cd39764392d8dbd338514fc7caec5f9b6f1632beb13b"} Jan 23 09:38:41 crc kubenswrapper[4982]: I0123 09:38:41.283312 4982 generic.go:334] "Generic (PLEG): container finished" podID="df37afea-7f1b-411e-ad95-cac7051ac0c5" containerID="b226845f2f6f32cd4766c92ba8162e8de1c54dfab2fdb1bc0fdee7c7dc17f9b7" exitCode=0 Jan 23 09:38:41 crc kubenswrapper[4982]: I0123 09:38:41.283443 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-lrbkl" event={"ID":"df37afea-7f1b-411e-ad95-cac7051ac0c5","Type":"ContainerDied","Data":"b226845f2f6f32cd4766c92ba8162e8de1c54dfab2fdb1bc0fdee7c7dc17f9b7"} Jan 23 09:38:41 crc kubenswrapper[4982]: I0123 09:38:41.435727 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:38:41 crc kubenswrapper[4982]: I0123 09:38:41.436080 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:38:42 crc kubenswrapper[4982]: I0123 09:38:42.296450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-lrbkl" event={"ID":"df37afea-7f1b-411e-ad95-cac7051ac0c5","Type":"ContainerStarted","Data":"dc9e48bc78b3f1749ae6336aab4e08324378a9c15b5cf7fe4a7c556a64906c28"} Jan 23 09:38:42 crc 
kubenswrapper[4982]: I0123 09:38:42.296796 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-lrbkl" Jan 23 09:38:42 crc kubenswrapper[4982]: I0123 09:38:42.321203 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-lrbkl" podStartSLOduration=4.104886719 podStartE2EDuration="5.321180019s" podCreationTimestamp="2026-01-23 09:38:37 +0000 UTC" firstStartedPulling="2026-01-23 09:38:38.75879078 +0000 UTC m=+6193.239154166" lastFinishedPulling="2026-01-23 09:38:39.97508408 +0000 UTC m=+6194.455447466" observedRunningTime="2026-01-23 09:38:42.313970485 +0000 UTC m=+6196.794333881" watchObservedRunningTime="2026-01-23 09:38:42.321180019 +0000 UTC m=+6196.801543405" Jan 23 09:38:53 crc kubenswrapper[4982]: I0123 09:38:53.199581 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-lrbkl" Jan 23 09:39:00 crc kubenswrapper[4982]: I0123 09:39:00.268381 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-gwxdf"] Jan 23 09:39:00 crc kubenswrapper[4982]: I0123 09:39:00.269146 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerName="octavia-amphora-httpd" containerID="cri-o://4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978" gracePeriod=30 Jan 23 09:39:00 crc kubenswrapper[4982]: I0123 09:39:00.925431 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.002652 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2742744-be53-4ca6-98a9-3a9b541ba0b8-amphora-image\") pod \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.002916 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2742744-be53-4ca6-98a9-3a9b541ba0b8-httpd-config\") pod \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\" (UID: \"f2742744-be53-4ca6-98a9-3a9b541ba0b8\") " Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.059831 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2742744-be53-4ca6-98a9-3a9b541ba0b8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f2742744-be53-4ca6-98a9-3a9b541ba0b8" (UID: "f2742744-be53-4ca6-98a9-3a9b541ba0b8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.101601 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2742744-be53-4ca6-98a9-3a9b541ba0b8-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "f2742744-be53-4ca6-98a9-3a9b541ba0b8" (UID: "f2742744-be53-4ca6-98a9-3a9b541ba0b8"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.105882 4982 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f2742744-be53-4ca6-98a9-3a9b541ba0b8-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.105918 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f2742744-be53-4ca6-98a9-3a9b541ba0b8-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.516911 4982 generic.go:334] "Generic (PLEG): container finished" podID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerID="4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978" exitCode=0 Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.516982 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" event={"ID":"f2742744-be53-4ca6-98a9-3a9b541ba0b8","Type":"ContainerDied","Data":"4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978"} Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.517008 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" event={"ID":"f2742744-be53-4ca6-98a9-3a9b541ba0b8","Type":"ContainerDied","Data":"47417a377ee1739316e3c2879aef6da3500ff735033d9f608b733a949e36ea69"} Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.517025 4982 scope.go:117] "RemoveContainer" containerID="4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.517039 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-gwxdf" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.543075 4982 scope.go:117] "RemoveContainer" containerID="35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.579738 4982 scope.go:117] "RemoveContainer" containerID="4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978" Jan 23 09:39:01 crc kubenswrapper[4982]: E0123 09:39:01.580192 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978\": container with ID starting with 4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978 not found: ID does not exist" containerID="4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.580280 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978"} err="failed to get container status \"4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978\": rpc error: code = NotFound desc = could not find container \"4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978\": container with ID starting with 4baa8a2457aa78c83924ea68c609abcacde8a75fdfe30bae10e50bc2237c1978 not found: ID does not exist" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.580334 4982 scope.go:117] "RemoveContainer" containerID="35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0" Jan 23 09:39:01 crc kubenswrapper[4982]: E0123 09:39:01.580783 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0\": container with ID starting with 35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0 not found: ID does not exist" containerID="35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.580819 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0"} err="failed to get container status \"35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0\": rpc error: code = NotFound desc = could not find container \"35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0\": container with ID starting with 35b7340c235000c0a3d88cb28237516db077229c619c9f9ab6f53a517aedcde0 not found: ID does not exist" Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.654950 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-gwxdf"] Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.664400 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-gwxdf"] Jan 23 09:39:01 crc kubenswrapper[4982]: I0123 09:39:01.840033 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" path="/var/lib/kubelet/pods/f2742744-be53-4ca6-98a9-3a9b541ba0b8/volumes" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.623220 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-nlx7l"] Jan 23 09:39:05 crc kubenswrapper[4982]: E0123 
09:39:05.624412 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerName="octavia-amphora-httpd" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.624432 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerName="octavia-amphora-httpd" Jan 23 09:39:05 crc kubenswrapper[4982]: E0123 09:39:05.624446 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerName="init" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.624454 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerName="init" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.624735 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2742744-be53-4ca6-98a9-3a9b541ba0b8" containerName="octavia-amphora-httpd" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.626405 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.634187 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-nlx7l"] Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.634858 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.806921 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/36fc9595-1ab2-42ec-ab65-f1e979daf0fd-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-nlx7l\" (UID: \"36fc9595-1ab2-42ec-ab65-f1e979daf0fd\") " pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.807036 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/36fc9595-1ab2-42ec-ab65-f1e979daf0fd-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-nlx7l\" (UID: \"36fc9595-1ab2-42ec-ab65-f1e979daf0fd\") " pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.909174 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/36fc9595-1ab2-42ec-ab65-f1e979daf0fd-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-nlx7l\" (UID: \"36fc9595-1ab2-42ec-ab65-f1e979daf0fd\") " pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.910553 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/36fc9595-1ab2-42ec-ab65-f1e979daf0fd-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-nlx7l\" (UID: \"36fc9595-1ab2-42ec-ab65-f1e979daf0fd\") " pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.911083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/36fc9595-1ab2-42ec-ab65-f1e979daf0fd-amphora-image\") pod \"octavia-image-upload-7b97d6bc64-nlx7l\" (UID: \"36fc9595-1ab2-42ec-ab65-f1e979daf0fd\") " pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc 
kubenswrapper[4982]: I0123 09:39:05.915425 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/36fc9595-1ab2-42ec-ab65-f1e979daf0fd-httpd-config\") pod \"octavia-image-upload-7b97d6bc64-nlx7l\" (UID: \"36fc9595-1ab2-42ec-ab65-f1e979daf0fd\") " pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:05 crc kubenswrapper[4982]: I0123 09:39:05.955813 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" Jan 23 09:39:06 crc kubenswrapper[4982]: I0123 09:39:06.440001 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-7b97d6bc64-nlx7l"] Jan 23 09:39:06 crc kubenswrapper[4982]: I0123 09:39:06.575285 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" event={"ID":"36fc9595-1ab2-42ec-ab65-f1e979daf0fd","Type":"ContainerStarted","Data":"23121b4a1aff12404602fc32d9ebebd20fc1571b72e4a920b6ef39b88ace2a4c"} Jan 23 09:39:07 crc kubenswrapper[4982]: I0123 09:39:07.587003 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" event={"ID":"36fc9595-1ab2-42ec-ab65-f1e979daf0fd","Type":"ContainerStarted","Data":"6acc4250f990b42db83cd7581dd65a7591bad31b6369bd5c95e37e168ff6424e"} Jan 23 09:39:08 crc kubenswrapper[4982]: I0123 09:39:08.599869 4982 generic.go:334] "Generic (PLEG): container finished" podID="36fc9595-1ab2-42ec-ab65-f1e979daf0fd" containerID="6acc4250f990b42db83cd7581dd65a7591bad31b6369bd5c95e37e168ff6424e" exitCode=0 Jan 23 09:39:08 crc kubenswrapper[4982]: I0123 09:39:08.599947 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" event={"ID":"36fc9595-1ab2-42ec-ab65-f1e979daf0fd","Type":"ContainerDied","Data":"6acc4250f990b42db83cd7581dd65a7591bad31b6369bd5c95e37e168ff6424e"} Jan 23 09:39:09 crc kubenswrapper[4982]: I0123 09:39:09.042162 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f0a6-account-create-update-hwtqd"] Jan 23 09:39:09 crc kubenswrapper[4982]: I0123 09:39:09.054580 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f0a6-account-create-update-hwtqd"] Jan 23 09:39:09 crc kubenswrapper[4982]: I0123 09:39:09.627836 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" event={"ID":"36fc9595-1ab2-42ec-ab65-f1e979daf0fd","Type":"ContainerStarted","Data":"5785f32e3b8f9411eed5d865a5bb25c8d86002d32747e17ba3996a17f7f44a9b"} Jan 23 09:39:09 crc kubenswrapper[4982]: I0123 09:39:09.842295 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab" path="/var/lib/kubelet/pods/3f8bdcde-75a9-4f3c-97bd-1946fa44a9ab/volumes" Jan 23 09:39:10 crc kubenswrapper[4982]: I0123 09:39:10.025293 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-7b97d6bc64-nlx7l" podStartSLOduration=4.254278263 podStartE2EDuration="5.025271425s" podCreationTimestamp="2026-01-23 09:39:05 +0000 UTC" firstStartedPulling="2026-01-23 09:39:06.452686411 +0000 UTC m=+6220.933049797" lastFinishedPulling="2026-01-23 09:39:07.223679573 +0000 UTC m=+6221.704042959" observedRunningTime="2026-01-23 09:39:09.651071143 +0000 UTC m=+6224.131434529" watchObservedRunningTime="2026-01-23 09:39:10.025271425 +0000 UTC m=+6224.505634811" Jan 23 09:39:10 crc 
kubenswrapper[4982]: I0123 09:39:10.032337 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hc5mz"] Jan 23 09:39:10 crc kubenswrapper[4982]: I0123 09:39:10.045562 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hc5mz"] Jan 23 09:39:11 crc kubenswrapper[4982]: I0123 09:39:11.435788 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:39:11 crc kubenswrapper[4982]: I0123 09:39:11.436095 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:39:11 crc kubenswrapper[4982]: I0123 09:39:11.841018 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746e2f37-b40c-43b9-b1ca-7f51b4c1eff4" path="/var/lib/kubelet/pods/746e2f37-b40c-43b9-b1ca-7f51b4c1eff4/volumes" Jan 23 09:39:16 crc kubenswrapper[4982]: I0123 09:39:16.041910 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-j6bdp"] Jan 23 09:39:16 crc kubenswrapper[4982]: I0123 09:39:16.053025 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-j6bdp"] Jan 23 09:39:17 crc kubenswrapper[4982]: I0123 09:39:17.842489 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc" path="/var/lib/kubelet/pods/26ece8c2-b47d-4ef8-abff-6c7bcde1a6fc/volumes" Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.435820 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.436482 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.436549 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.437333 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed33b1a30507fbf99e82a0af06fe01d0b4796979bfbb452c4c776529e48b651d"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.437386 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" 
containerID="cri-o://ed33b1a30507fbf99e82a0af06fe01d0b4796979bfbb452c4c776529e48b651d" gracePeriod=600 Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.991736 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="ed33b1a30507fbf99e82a0af06fe01d0b4796979bfbb452c4c776529e48b651d" exitCode=0 Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.991824 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"ed33b1a30507fbf99e82a0af06fe01d0b4796979bfbb452c4c776529e48b651d"} Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.992155 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29"} Jan 23 09:39:41 crc kubenswrapper[4982]: I0123 09:39:41.992179 4982 scope.go:117] "RemoveContainer" containerID="4b3f86e81113e7541a08f2569025c66ff39a99651e6b7720bf541eda5f5345dd" Jan 23 09:39:44 crc kubenswrapper[4982]: I0123 09:39:44.751654 4982 scope.go:117] "RemoveContainer" containerID="bedd16242fbcc4e9f3d6f27c9e0551be756c78119068a6445508406993e56141" Jan 23 09:39:44 crc kubenswrapper[4982]: I0123 09:39:44.778707 4982 scope.go:117] "RemoveContainer" containerID="c8ce8058f450bab3ae06ad600da37fbadb5164c1fd7ae203b6c8a9f12c4dc17e" Jan 23 09:39:44 crc kubenswrapper[4982]: I0123 09:39:44.850252 4982 scope.go:117] "RemoveContainer" containerID="b7076c268357d5d2739a2ad14bc0c18e79807973d8854181342597e987b8df49" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.847050 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-686667bfcc-bxstn"] Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.859029 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.868428 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-h7hjd" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.868406 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.868739 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.868797 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.879268 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-686667bfcc-bxstn"] Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.930178 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.930462 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-log" containerID="cri-o://17cef1f0cabf4e5278e702b1ac9f4319db73ebaf8713f6e2e8341dc092cb93bc" gracePeriod=30 Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.931106 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-httpd" containerID="cri-o://8a65799603c27e31eb775be9e5ea32340fbe9357a118123727f2210c5987765e" gracePeriod=30 Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.949988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-scripts\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.950341 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-config-data\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.950474 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9200d89b-67fd-4267-bef8-842c200213be-logs\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.950507 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9200d89b-67fd-4267-bef8-842c200213be-horizon-secret-key\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.950545 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wkgl\" (UniqueName: 
\"kubernetes.io/projected/9200d89b-67fd-4267-bef8-842c200213be-kube-api-access-7wkgl\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.965641 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b8bcdc95c-db4xb"] Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.968991 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:46 crc kubenswrapper[4982]: I0123 09:39:46.988328 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b8bcdc95c-db4xb"] Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.029199 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.029415 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-log" containerID="cri-o://0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e" gracePeriod=30 Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.029744 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-httpd" containerID="cri-o://c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3" gracePeriod=30 Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.059805 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9200d89b-67fd-4267-bef8-842c200213be-logs\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.059863 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9200d89b-67fd-4267-bef8-842c200213be-horizon-secret-key\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.059896 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wkgl\" (UniqueName: \"kubernetes.io/projected/9200d89b-67fd-4267-bef8-842c200213be-kube-api-access-7wkgl\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.059958 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-scripts\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.060017 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-config-data\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.060440 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9200d89b-67fd-4267-bef8-842c200213be-logs\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.061212 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-scripts\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.063587 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-config-data\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.070032 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9200d89b-67fd-4267-bef8-842c200213be-horizon-secret-key\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.081527 4982 generic.go:334] "Generic (PLEG): container finished" podID="3986d8ac-65c3-4513-9369-83ea91215dac" containerID="17cef1f0cabf4e5278e702b1ac9f4319db73ebaf8713f6e2e8341dc092cb93bc" exitCode=143 Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.081572 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3986d8ac-65c3-4513-9369-83ea91215dac","Type":"ContainerDied","Data":"17cef1f0cabf4e5278e702b1ac9f4319db73ebaf8713f6e2e8341dc092cb93bc"} Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.083031 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wkgl\" (UniqueName: \"kubernetes.io/projected/9200d89b-67fd-4267-bef8-842c200213be-kube-api-access-7wkgl\") pod \"horizon-686667bfcc-bxstn\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.162385 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-scripts\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.162527 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjrk4\" (UniqueName: \"kubernetes.io/projected/86199b3e-3bb1-4425-96c9-8631b04ccb5d-kube-api-access-kjrk4\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.162592 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86199b3e-3bb1-4425-96c9-8631b04ccb5d-logs\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc 
kubenswrapper[4982]: I0123 09:39:47.162810 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86199b3e-3bb1-4425-96c9-8631b04ccb5d-horizon-secret-key\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.162986 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-config-data\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.180573 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.264334 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-scripts\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.264697 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjrk4\" (UniqueName: \"kubernetes.io/projected/86199b3e-3bb1-4425-96c9-8631b04ccb5d-kube-api-access-kjrk4\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.264737 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86199b3e-3bb1-4425-96c9-8631b04ccb5d-logs\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.264795 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86199b3e-3bb1-4425-96c9-8631b04ccb5d-horizon-secret-key\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.264859 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-config-data\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.265225 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-scripts\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.265254 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86199b3e-3bb1-4425-96c9-8631b04ccb5d-logs\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 
crc kubenswrapper[4982]: I0123 09:39:47.266126 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-config-data\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.268749 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86199b3e-3bb1-4425-96c9-8631b04ccb5d-horizon-secret-key\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.288007 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjrk4\" (UniqueName: \"kubernetes.io/projected/86199b3e-3bb1-4425-96c9-8631b04ccb5d-kube-api-access-kjrk4\") pod \"horizon-b8bcdc95c-db4xb\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.588664 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:39:47 crc kubenswrapper[4982]: I0123 09:39:47.698713 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-686667bfcc-bxstn"] Jan 23 09:39:48 crc kubenswrapper[4982]: W0123 09:39:48.050516 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86199b3e_3bb1_4425_96c9_8631b04ccb5d.slice/crio-92bf8a81e019a0f40db995282cc07b54f0bb0b84261c7dae6a0e0b6e48b1095f WatchSource:0}: Error finding container 92bf8a81e019a0f40db995282cc07b54f0bb0b84261c7dae6a0e0b6e48b1095f: Status 404 returned error can't find the container with id 92bf8a81e019a0f40db995282cc07b54f0bb0b84261c7dae6a0e0b6e48b1095f Jan 23 09:39:48 crc kubenswrapper[4982]: I0123 09:39:48.062267 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b8bcdc95c-db4xb"] Jan 23 09:39:48 crc kubenswrapper[4982]: I0123 09:39:48.094849 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b8bcdc95c-db4xb" event={"ID":"86199b3e-3bb1-4425-96c9-8631b04ccb5d","Type":"ContainerStarted","Data":"92bf8a81e019a0f40db995282cc07b54f0bb0b84261c7dae6a0e0b6e48b1095f"} Jan 23 09:39:48 crc kubenswrapper[4982]: I0123 09:39:48.096573 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686667bfcc-bxstn" event={"ID":"9200d89b-67fd-4267-bef8-842c200213be","Type":"ContainerStarted","Data":"116804879d0d8de61d38de484debce427e6dc8c0f29ca1f1ce98e338120e81d8"} Jan 23 09:39:48 crc kubenswrapper[4982]: I0123 09:39:48.101368 4982 generic.go:334] "Generic (PLEG): container finished" podID="19167825-b7b1-4661-b978-4c63d9d13d12" containerID="0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e" exitCode=143 Jan 23 09:39:48 crc kubenswrapper[4982]: I0123 09:39:48.101398 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19167825-b7b1-4661-b978-4c63d9d13d12","Type":"ContainerDied","Data":"0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e"} Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.436290 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-686667bfcc-bxstn"] Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 
09:39:49.474635 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-89645b8f8-vzg9j"] Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.476523 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.480020 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.486690 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89645b8f8-vzg9j"] Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.542926 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b8bcdc95c-db4xb"] Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.578024 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8647f885d6-jbzv7"] Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.580112 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.589558 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8647f885d6-jbzv7"] Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.629905 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-tls-certs\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.630147 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-scripts\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.630325 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-combined-ca-bundle\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.630433 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-logs\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.630528 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287nt\" (UniqueName: \"kubernetes.io/projected/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-kube-api-access-287nt\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.630611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-config-data\") pod \"horizon-89645b8f8-vzg9j\" (UID: 
\"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.630737 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-secret-key\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.732953 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-scripts\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733019 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-combined-ca-bundle\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733059 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-logs\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q57s\" (UniqueName: \"kubernetes.io/projected/928c735a-055d-42b8-9e73-5a963fa6df88-kube-api-access-7q57s\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733127 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-287nt\" (UniqueName: \"kubernetes.io/projected/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-kube-api-access-287nt\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733149 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-secret-key\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733169 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-config-data\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733211 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-secret-key\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " 
pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733227 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/928c735a-055d-42b8-9e73-5a963fa6df88-logs\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733271 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-tls-certs\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733298 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-config-data\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733315 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-tls-certs\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733333 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-scripts\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.733356 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-combined-ca-bundle\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.734296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-logs\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.736670 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-scripts\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.737427 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-config-data\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.753639 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-secret-key\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.753871 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-tls-certs\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.753859 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-combined-ca-bundle\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.759694 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-287nt\" (UniqueName: \"kubernetes.io/projected/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-kube-api-access-287nt\") pod \"horizon-89645b8f8-vzg9j\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.815556 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.835500 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-secret-key\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.835650 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/928c735a-055d-42b8-9e73-5a963fa6df88-logs\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.835814 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-config-data\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.835837 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-tls-certs\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.835886 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-combined-ca-bundle\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.836022 4982 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-scripts\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.836199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q57s\" (UniqueName: \"kubernetes.io/projected/928c735a-055d-42b8-9e73-5a963fa6df88-kube-api-access-7q57s\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.837262 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/928c735a-055d-42b8-9e73-5a963fa6df88-logs\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.837618 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-config-data\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.842486 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-combined-ca-bundle\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.843248 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-scripts\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.845900 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-tls-certs\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.851776 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-secret-key\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.858323 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q57s\" (UniqueName: \"kubernetes.io/projected/928c735a-055d-42b8-9e73-5a963fa6df88-kube-api-access-7q57s\") pod \"horizon-8647f885d6-jbzv7\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:49 crc kubenswrapper[4982]: I0123 09:39:49.916536 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.329906 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89645b8f8-vzg9j"] Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.463428 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8647f885d6-jbzv7"] Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.871616 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971098 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-scripts\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971207 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-httpd-run\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971277 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-config-data\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971334 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xbt\" (UniqueName: \"kubernetes.io/projected/19167825-b7b1-4661-b978-4c63d9d13d12-kube-api-access-45xbt\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971394 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-combined-ca-bundle\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971470 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-logs\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971562 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-internal-tls-certs\") pod \"19167825-b7b1-4661-b978-4c63d9d13d12\" (UID: \"19167825-b7b1-4661-b978-4c63d9d13d12\") " Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.971934 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.972462 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.972796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-logs" (OuterVolumeSpecName: "logs") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.986295 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-scripts" (OuterVolumeSpecName: "scripts") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:50 crc kubenswrapper[4982]: I0123 09:39:50.991722 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19167825-b7b1-4661-b978-4c63d9d13d12-kube-api-access-45xbt" (OuterVolumeSpecName: "kube-api-access-45xbt") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "kube-api-access-45xbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.032912 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.053850 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-config-data" (OuterVolumeSpecName: "config-data") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.074950 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.074994 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45xbt\" (UniqueName: \"kubernetes.io/projected/19167825-b7b1-4661-b978-4c63d9d13d12-kube-api-access-45xbt\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.075008 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.075020 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19167825-b7b1-4661-b978-4c63d9d13d12-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.075030 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.095196 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "19167825-b7b1-4661-b978-4c63d9d13d12" (UID: "19167825-b7b1-4661-b978-4c63d9d13d12"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.142727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89645b8f8-vzg9j" event={"ID":"f1cf34b8-04ae-4bd5-be37-b5cd559e3526","Type":"ContainerStarted","Data":"3df5ce9ea7d982834f2ec12eeabfcb1c303a2b12d142bd0d788d84255c9e6226"} Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.145249 4982 generic.go:334] "Generic (PLEG): container finished" podID="19167825-b7b1-4661-b978-4c63d9d13d12" containerID="c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3" exitCode=0 Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.145290 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19167825-b7b1-4661-b978-4c63d9d13d12","Type":"ContainerDied","Data":"c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3"} Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.145308 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"19167825-b7b1-4661-b978-4c63d9d13d12","Type":"ContainerDied","Data":"f0bd2298083b6255299633d26f48f3aca62efe52a25210acb4140f42a833c4ec"} Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.145324 4982 scope.go:117] "RemoveContainer" containerID="c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.145452 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.149765 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647f885d6-jbzv7" event={"ID":"928c735a-055d-42b8-9e73-5a963fa6df88","Type":"ContainerStarted","Data":"5a7664a1bf6f432d6cffe5562174881dd7575e1e218c0c85246209277c068cba"} Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.155936 4982 generic.go:334] "Generic (PLEG): container finished" podID="3986d8ac-65c3-4513-9369-83ea91215dac" containerID="8a65799603c27e31eb775be9e5ea32340fbe9357a118123727f2210c5987765e" exitCode=0 Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.156260 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3986d8ac-65c3-4513-9369-83ea91215dac","Type":"ContainerDied","Data":"8a65799603c27e31eb775be9e5ea32340fbe9357a118123727f2210c5987765e"} Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.156315 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3986d8ac-65c3-4513-9369-83ea91215dac","Type":"ContainerDied","Data":"7b00f9d11fd38c9b556a0f5370d1b28709dae659006d5127605b307ffdcb2c43"} Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.156327 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b00f9d11fd38c9b556a0f5370d1b28709dae659006d5127605b307ffdcb2c43" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.170067 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.179134 4982 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19167825-b7b1-4661-b978-4c63d9d13d12-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.209151 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.221895 4982 scope.go:117] "RemoveContainer" containerID="0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.234500 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.251737 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:39:51 crc kubenswrapper[4982]: E0123 09:39:51.252290 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-httpd" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252314 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-httpd" Jan 23 09:39:51 crc kubenswrapper[4982]: E0123 09:39:51.252338 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-httpd" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252345 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-httpd" Jan 23 09:39:51 crc kubenswrapper[4982]: E0123 09:39:51.252367 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-log" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252373 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-log" Jan 23 09:39:51 crc kubenswrapper[4982]: E0123 09:39:51.252393 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-log" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252399 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-log" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252643 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-log" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252665 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-httpd" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252676 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" containerName="glance-log" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.252689 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" containerName="glance-httpd" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.253825 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.258840 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.258994 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.278233 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.294365 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-logs" (OuterVolumeSpecName: "logs") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: "3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295059 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-logs\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295217 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-config-data\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295269 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-combined-ca-bundle\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295330 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64kkm\" (UniqueName: \"kubernetes.io/projected/3986d8ac-65c3-4513-9369-83ea91215dac-kube-api-access-64kkm\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295349 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295470 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-public-tls-certs\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.295508 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-scripts\") pod \"3986d8ac-65c3-4513-9369-83ea91215dac\" (UID: \"3986d8ac-65c3-4513-9369-83ea91215dac\") " Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.296153 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: "3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.296742 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.296842 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5cd222-e178-4af8-aa45-d396d580d2d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.297051 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.297101 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9pj\" (UniqueName: \"kubernetes.io/projected/ec5cd222-e178-4af8-aa45-d396d580d2d5-kube-api-access-rq9pj\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.297430 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.298794 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec5cd222-e178-4af8-aa45-d396d580d2d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.298825 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.298930 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.299060 4982 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3986d8ac-65c3-4513-9369-83ea91215dac-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.299325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-scripts" (OuterVolumeSpecName: "scripts") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: "3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.299969 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3986d8ac-65c3-4513-9369-83ea91215dac-kube-api-access-64kkm" (OuterVolumeSpecName: "kube-api-access-64kkm") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: "3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "kube-api-access-64kkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.312583 4982 scope.go:117] "RemoveContainer" containerID="c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3" Jan 23 09:39:51 crc kubenswrapper[4982]: E0123 09:39:51.314761 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3\": container with ID starting with c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3 not found: ID does not exist" containerID="c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.314809 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3"} err="failed to get container status \"c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3\": rpc error: code = NotFound desc = could not find container \"c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3\": container with ID starting with c1aab3448900e5b09cbf853deb5fb812ad6faf67fb91e6d181e9e40f85234eb3 not found: ID does not exist" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.314834 4982 scope.go:117] "RemoveContainer" containerID="0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e" Jan 23 09:39:51 crc kubenswrapper[4982]: E0123 09:39:51.322172 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e\": container with ID starting with 0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e not found: ID does not exist" containerID="0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.322228 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e"} err="failed to get container status \"0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e\": rpc error: code = NotFound desc = could not find container \"0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e\": container with ID starting with 0451faa7e81af90c9fa9a38722d49a2610ee11adb38d2b85f39db20ffa9d2d7e not found: ID does not exist" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.338798 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: 
"3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.365719 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-config-data" (OuterVolumeSpecName: "config-data") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: "3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.380357 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3986d8ac-65c3-4513-9369-83ea91215dac" (UID: "3986d8ac-65c3-4513-9369-83ea91215dac"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401490 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401541 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq9pj\" (UniqueName: \"kubernetes.io/projected/ec5cd222-e178-4af8-aa45-d396d580d2d5-kube-api-access-rq9pj\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401598 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401658 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec5cd222-e178-4af8-aa45-d396d580d2d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401673 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401811 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.401896 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ec5cd222-e178-4af8-aa45-d396d580d2d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402047 4982 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402063 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402076 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402094 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3986d8ac-65c3-4513-9369-83ea91215dac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402104 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64kkm\" (UniqueName: \"kubernetes.io/projected/3986d8ac-65c3-4513-9369-83ea91215dac-kube-api-access-64kkm\") on node \"crc\" DevicePath \"\"" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402440 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec5cd222-e178-4af8-aa45-d396d580d2d5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.402795 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec5cd222-e178-4af8-aa45-d396d580d2d5-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.406121 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.406689 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.407849 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.408818 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ec5cd222-e178-4af8-aa45-d396d580d2d5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.424449 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq9pj\" (UniqueName: \"kubernetes.io/projected/ec5cd222-e178-4af8-aa45-d396d580d2d5-kube-api-access-rq9pj\") pod \"glance-default-internal-api-0\" (UID: \"ec5cd222-e178-4af8-aa45-d396d580d2d5\") " pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.578821 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 09:39:51 crc kubenswrapper[4982]: I0123 09:39:51.847240 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19167825-b7b1-4661-b978-4c63d9d13d12" path="/var/lib/kubelet/pods/19167825-b7b1-4661-b978-4c63d9d13d12/volumes" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.179573 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.213838 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.227304 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.250206 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.256260 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.259769 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.259780 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.263740 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440160 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d38494-7e3d-4359-869b-ac0460e2cf52-logs\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440268 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440340 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d38494-7e3d-4359-869b-ac0460e2cf52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440362 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440416 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.440445 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv9z8\" (UniqueName: \"kubernetes.io/projected/25d38494-7e3d-4359-869b-ac0460e2cf52-kube-api-access-cv9z8\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.541937 4982 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.542575 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d38494-7e3d-4359-869b-ac0460e2cf52-logs\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.542603 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-scripts\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.542669 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d38494-7e3d-4359-869b-ac0460e2cf52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.542689 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.542717 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.542743 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv9z8\" (UniqueName: \"kubernetes.io/projected/25d38494-7e3d-4359-869b-ac0460e2cf52-kube-api-access-cv9z8\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.543441 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25d38494-7e3d-4359-869b-ac0460e2cf52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.543747 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25d38494-7e3d-4359-869b-ac0460e2cf52-logs\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.548326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.548526 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-config-data\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.549079 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.551613 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25d38494-7e3d-4359-869b-ac0460e2cf52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.566154 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv9z8\" (UniqueName: \"kubernetes.io/projected/25d38494-7e3d-4359-869b-ac0460e2cf52-kube-api-access-cv9z8\") pod \"glance-default-external-api-0\" (UID: \"25d38494-7e3d-4359-869b-ac0460e2cf52\") " pod="openstack/glance-default-external-api-0" Jan 23 09:39:52 crc kubenswrapper[4982]: I0123 09:39:52.611993 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 09:39:53 crc kubenswrapper[4982]: I0123 09:39:53.842782 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3986d8ac-65c3-4513-9369-83ea91215dac" path="/var/lib/kubelet/pods/3986d8ac-65c3-4513-9369-83ea91215dac/volumes" Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.100249 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-82a9-account-create-update-qjj6n"] Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.114690 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-82a9-account-create-update-qjj6n"] Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.131604 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bh2x9"] Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.151186 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bh2x9"] Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.756494 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 09:39:57 crc kubenswrapper[4982]: W0123 09:39:57.780085 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25d38494_7e3d_4359_869b_ac0460e2cf52.slice/crio-1ee77419959d14f4ad00290476e6895b86e63e8df4fe6a95f686bf7efbde3bbd WatchSource:0}: Error finding container 1ee77419959d14f4ad00290476e6895b86e63e8df4fe6a95f686bf7efbde3bbd: Status 404 returned error can't find the container with id 1ee77419959d14f4ad00290476e6895b86e63e8df4fe6a95f686bf7efbde3bbd Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.844707 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76" path="/var/lib/kubelet/pods/5e51f2f1-0ec5-46b2-ad82-9ac826ad9e76/volumes" Jan 23 09:39:57 crc kubenswrapper[4982]: I0123 09:39:57.845653 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8171e575-adcc-47d2-b4c4-bdea3cdabe65" path="/var/lib/kubelet/pods/8171e575-adcc-47d2-b4c4-bdea3cdabe65/volumes" Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.281336 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89645b8f8-vzg9j" event={"ID":"f1cf34b8-04ae-4bd5-be37-b5cd559e3526","Type":"ContainerStarted","Data":"e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.281695 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89645b8f8-vzg9j" event={"ID":"f1cf34b8-04ae-4bd5-be37-b5cd559e3526","Type":"ContainerStarted","Data":"89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.289803 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b8bcdc95c-db4xb" event={"ID":"86199b3e-3bb1-4425-96c9-8631b04ccb5d","Type":"ContainerStarted","Data":"f5e2e7372683319e2a89a8f415f4bb163dacf95a7fa2c6b9b8ac41997c200ffe"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.289852 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b8bcdc95c-db4xb" event={"ID":"86199b3e-3bb1-4425-96c9-8631b04ccb5d","Type":"ContainerStarted","Data":"90f50ff30ceaf705749777b84ada9e74be0f513fd226b3d77d26fa4161c18a70"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.289892 4982 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/horizon-b8bcdc95c-db4xb" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon-log" containerID="cri-o://90f50ff30ceaf705749777b84ada9e74be0f513fd226b3d77d26fa4161c18a70" gracePeriod=30 Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.289892 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b8bcdc95c-db4xb" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon" containerID="cri-o://f5e2e7372683319e2a89a8f415f4bb163dacf95a7fa2c6b9b8ac41997c200ffe" gracePeriod=30 Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.302560 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686667bfcc-bxstn" event={"ID":"9200d89b-67fd-4267-bef8-842c200213be","Type":"ContainerStarted","Data":"834377c33d847a4079e55572e29239e404b9b3cbaf321bad628e25e5b03fa672"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.302636 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686667bfcc-bxstn" event={"ID":"9200d89b-67fd-4267-bef8-842c200213be","Type":"ContainerStarted","Data":"a12f0d88db791421f881b94a9aa9dc8a60a2b88f70723a7b2ae5e18f95772cee"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.302692 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-686667bfcc-bxstn" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon-log" containerID="cri-o://a12f0d88db791421f881b94a9aa9dc8a60a2b88f70723a7b2ae5e18f95772cee" gracePeriod=30 Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.302700 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-686667bfcc-bxstn" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon" containerID="cri-o://834377c33d847a4079e55572e29239e404b9b3cbaf321bad628e25e5b03fa672" gracePeriod=30 Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.311566 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-89645b8f8-vzg9j" podStartSLOduration=2.604144799 podStartE2EDuration="9.311546714s" podCreationTimestamp="2026-01-23 09:39:49 +0000 UTC" firstStartedPulling="2026-01-23 09:39:50.353293719 +0000 UTC m=+6264.833657105" lastFinishedPulling="2026-01-23 09:39:57.060695634 +0000 UTC m=+6271.541059020" observedRunningTime="2026-01-23 09:39:58.30659423 +0000 UTC m=+6272.786957636" watchObservedRunningTime="2026-01-23 09:39:58.311546714 +0000 UTC m=+6272.791910120" Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.316051 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647f885d6-jbzv7" event={"ID":"928c735a-055d-42b8-9e73-5a963fa6df88","Type":"ContainerStarted","Data":"6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.316254 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647f885d6-jbzv7" event={"ID":"928c735a-055d-42b8-9e73-5a963fa6df88","Type":"ContainerStarted","Data":"64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.321822 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d38494-7e3d-4359-869b-ac0460e2cf52","Type":"ContainerStarted","Data":"1ee77419959d14f4ad00290476e6895b86e63e8df4fe6a95f686bf7efbde3bbd"} Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.349183 4982 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b8bcdc95c-db4xb" podStartSLOduration=3.154775327 podStartE2EDuration="12.349160537s" podCreationTimestamp="2026-01-23 09:39:46 +0000 UTC" firstStartedPulling="2026-01-23 09:39:48.052684775 +0000 UTC m=+6262.533048161" lastFinishedPulling="2026-01-23 09:39:57.247069985 +0000 UTC m=+6271.727433371" observedRunningTime="2026-01-23 09:39:58.336616009 +0000 UTC m=+6272.816979395" watchObservedRunningTime="2026-01-23 09:39:58.349160537 +0000 UTC m=+6272.829523933" Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.449643 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-686667bfcc-bxstn" podStartSLOduration=3.153050461 podStartE2EDuration="12.449609014s" podCreationTimestamp="2026-01-23 09:39:46 +0000 UTC" firstStartedPulling="2026-01-23 09:39:47.703309862 +0000 UTC m=+6262.183673248" lastFinishedPulling="2026-01-23 09:39:56.999868415 +0000 UTC m=+6271.480231801" observedRunningTime="2026-01-23 09:39:58.371817538 +0000 UTC m=+6272.852180934" watchObservedRunningTime="2026-01-23 09:39:58.449609014 +0000 UTC m=+6272.929972400" Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.487073 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8647f885d6-jbzv7" podStartSLOduration=2.938771195 podStartE2EDuration="9.487045902s" podCreationTimestamp="2026-01-23 09:39:49 +0000 UTC" firstStartedPulling="2026-01-23 09:39:50.511038249 +0000 UTC m=+6264.991401635" lastFinishedPulling="2026-01-23 09:39:57.059312956 +0000 UTC m=+6271.539676342" observedRunningTime="2026-01-23 09:39:58.410259093 +0000 UTC m=+6272.890622489" watchObservedRunningTime="2026-01-23 09:39:58.487045902 +0000 UTC m=+6272.967409288" Jan 23 09:39:58 crc kubenswrapper[4982]: W0123 09:39:58.619862 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec5cd222_e178_4af8_aa45_d396d580d2d5.slice/crio-e36812716f8cb50ece9b0a066baaf888e9528458ab5dcec48ad0dee357b4f0e4 WatchSource:0}: Error finding container e36812716f8cb50ece9b0a066baaf888e9528458ab5dcec48ad0dee357b4f0e4: Status 404 returned error can't find the container with id e36812716f8cb50ece9b0a066baaf888e9528458ab5dcec48ad0dee357b4f0e4 Jan 23 09:39:58 crc kubenswrapper[4982]: I0123 09:39:58.627906 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.344748 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d38494-7e3d-4359-869b-ac0460e2cf52","Type":"ContainerStarted","Data":"e59e79fddeac43fb7d5c56d30443a6d2512fd206a9539b781d34a910d57c7dac"} Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.345080 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"25d38494-7e3d-4359-869b-ac0460e2cf52","Type":"ContainerStarted","Data":"94fafa3a0d16218c9d6185888d7a9216f16291d4567cbe15dda98d063140b929"} Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.347444 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec5cd222-e178-4af8-aa45-d396d580d2d5","Type":"ContainerStarted","Data":"af37e91ae583eefa8813c8ecac6f980c9cf7cd864a995901dd8c526e34f8ae21"} Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.347503 4982 kubelet.go:2453] "SyncLoop 
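Each pod_startup_latency_tracker line carries enough to cross-check itself: in all three horizon entries above, podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO figure appears to exclude pull time. A worked check against the horizon-89645b8f8-vzg9j numbers, truncating the log's nanoseconds to the microseconds Python's datetime can hold:

    from datetime import datetime, timezone

    def t(s):  # "2026-01-23 09:39:58.311546" -> aware datetime (UTC)
        head, _, frac = s.partition('.')
        base = datetime.strptime(head, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)
        return base.replace(microsecond=int(frac[:6].ljust(6, '0'))) if frac else base

    created    = t('2026-01-23 09:39:49')
    first_pull = t('2026-01-23 09:39:50.353293')
    last_pull  = t('2026-01-23 09:39:57.060695')
    running    = t('2026-01-23 09:39:58.311546')   # watchObservedRunningTime

    e2e = (running - created).total_seconds()
    slo = e2e - (last_pull - first_pull).total_seconds()
    # prints e2e=9.311546 s  slo=2.604144 s, matching podStartE2EDuration="9.311546714s"
    # and podStartSLOduration=2.604144799 up to the truncated nanoseconds
    print(f'e2e={e2e:.6f} s  slo={slo:.6f} s')

The same identities hold for the b8bcdc95c-db4xb and 686667bfcc-bxstn entries; the glance trackers below show the degenerate case, where both pull timestamps are the zero time "0001-01-01 00:00:00" and SLO equals E2E.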
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec5cd222-e178-4af8-aa45-d396d580d2d5","Type":"ContainerStarted","Data":"e36812716f8cb50ece9b0a066baaf888e9528458ab5dcec48ad0dee357b4f0e4"} Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.405849 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.405818066 podStartE2EDuration="7.405818066s" podCreationTimestamp="2026-01-23 09:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:39:59.373124865 +0000 UTC m=+6273.853488271" watchObservedRunningTime="2026-01-23 09:39:59.405818066 +0000 UTC m=+6273.886181472" Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.816038 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.816136 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.916876 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:39:59 crc kubenswrapper[4982]: I0123 09:39:59.916954 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:40:00 crc kubenswrapper[4982]: I0123 09:40:00.365171 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec5cd222-e178-4af8-aa45-d396d580d2d5","Type":"ContainerStarted","Data":"2a9ae81dd0b0347eec5e6bb4e99e29c9fd384d0574ed024e8ac3498fccdc7346"} Jan 23 09:40:00 crc kubenswrapper[4982]: I0123 09:40:00.396489 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.396467107 podStartE2EDuration="9.396467107s" podCreationTimestamp="2026-01-23 09:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:40:00.388983925 +0000 UTC m=+6274.869347321" watchObservedRunningTime="2026-01-23 09:40:00.396467107 +0000 UTC m=+6274.876830513" Jan 23 09:40:01 crc kubenswrapper[4982]: I0123 09:40:01.579586 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:01 crc kubenswrapper[4982]: I0123 09:40:01.579672 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:01 crc kubenswrapper[4982]: I0123 09:40:01.634527 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:01 crc kubenswrapper[4982]: I0123 09:40:01.642669 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:02 crc kubenswrapper[4982]: I0123 09:40:02.384412 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:02 crc kubenswrapper[4982]: I0123 09:40:02.384455 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:02 crc kubenswrapper[4982]: I0123 09:40:02.612434 4982 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 09:40:02 crc kubenswrapper[4982]: I0123 09:40:02.612790 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 09:40:02 crc kubenswrapper[4982]: I0123 09:40:02.656465 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 09:40:02 crc kubenswrapper[4982]: I0123 09:40:02.658420 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 09:40:03 crc kubenswrapper[4982]: I0123 09:40:03.404320 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 09:40:03 crc kubenswrapper[4982]: I0123 09:40:03.404363 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 09:40:05 crc kubenswrapper[4982]: I0123 09:40:05.435597 4982 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 09:40:06 crc kubenswrapper[4982]: I0123 09:40:06.420728 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 09:40:06 crc kubenswrapper[4982]: I0123 09:40:06.424186 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 09:40:07 crc kubenswrapper[4982]: I0123 09:40:07.091176 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gjzdw"] Jan 23 09:40:07 crc kubenswrapper[4982]: I0123 09:40:07.117554 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gjzdw"] Jan 23 09:40:07 crc kubenswrapper[4982]: I0123 09:40:07.181542 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:40:07 crc kubenswrapper[4982]: I0123 09:40:07.589661 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:40:07 crc kubenswrapper[4982]: I0123 09:40:07.849475 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e739740f-4882-439c-8deb-0b28947ed7d3" path="/var/lib/kubelet/pods/e739740f-4882-439c-8deb-0b28947ed7d3/volumes" Jan 23 09:40:09 crc kubenswrapper[4982]: I0123 09:40:09.817893 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.138:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.138:8443: connect: connection refused" Jan 23 09:40:09 crc kubenswrapper[4982]: I0123 09:40:09.918385 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8647f885d6-jbzv7" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.139:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8443: connect: connection refused" Jan 23 09:40:21 crc kubenswrapper[4982]: I0123 09:40:21.598570 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:40:21 crc kubenswrapper[4982]: I0123 09:40:21.714279 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.367024 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.471687 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89645b8f8-vzg9j"] Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.471955 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon-log" containerID="cri-o://89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b" gracePeriod=30 Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.472500 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" containerID="cri-o://e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c" gracePeriod=30 Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.484946 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.138:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.658253 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:23 crc kubenswrapper[4982]: I0123 09:40:23.672464 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 09:40:26 crc kubenswrapper[4982]: I0123 09:40:26.871705 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.138:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:59470->10.217.1.138:8443: read: connection reset by peer" Jan 23 09:40:27 crc kubenswrapper[4982]: I0123 09:40:27.732688 4982 generic.go:334] "Generic (PLEG): container finished" podID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerID="e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c" exitCode=0 Jan 23 09:40:27 crc kubenswrapper[4982]: I0123 09:40:27.732886 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89645b8f8-vzg9j" event={"ID":"f1cf34b8-04ae-4bd5-be37-b5cd559e3526","Type":"ContainerDied","Data":"e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c"} Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.746983 4982 generic.go:334] "Generic (PLEG): container finished" podID="9200d89b-67fd-4267-bef8-842c200213be" containerID="834377c33d847a4079e55572e29239e404b9b3cbaf321bad628e25e5b03fa672" exitCode=137 Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.747348 4982 generic.go:334] "Generic (PLEG): container finished" podID="9200d89b-67fd-4267-bef8-842c200213be" containerID="a12f0d88db791421f881b94a9aa9dc8a60a2b88f70723a7b2ae5e18f95772cee" exitCode=137 Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.747032 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686667bfcc-bxstn" 
event={"ID":"9200d89b-67fd-4267-bef8-842c200213be","Type":"ContainerDied","Data":"834377c33d847a4079e55572e29239e404b9b3cbaf321bad628e25e5b03fa672"} Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.747403 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686667bfcc-bxstn" event={"ID":"9200d89b-67fd-4267-bef8-842c200213be","Type":"ContainerDied","Data":"a12f0d88db791421f881b94a9aa9dc8a60a2b88f70723a7b2ae5e18f95772cee"} Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.749942 4982 generic.go:334] "Generic (PLEG): container finished" podID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerID="f5e2e7372683319e2a89a8f415f4bb163dacf95a7fa2c6b9b8ac41997c200ffe" exitCode=137 Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.749970 4982 generic.go:334] "Generic (PLEG): container finished" podID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerID="90f50ff30ceaf705749777b84ada9e74be0f513fd226b3d77d26fa4161c18a70" exitCode=137 Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.749988 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b8bcdc95c-db4xb" event={"ID":"86199b3e-3bb1-4425-96c9-8631b04ccb5d","Type":"ContainerDied","Data":"f5e2e7372683319e2a89a8f415f4bb163dacf95a7fa2c6b9b8ac41997c200ffe"} Jan 23 09:40:28 crc kubenswrapper[4982]: I0123 09:40:28.750014 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b8bcdc95c-db4xb" event={"ID":"86199b3e-3bb1-4425-96c9-8631b04ccb5d","Type":"ContainerDied","Data":"90f50ff30ceaf705749777b84ada9e74be0f513fd226b3d77d26fa4161c18a70"} Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.037986 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.131257 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9200d89b-67fd-4267-bef8-842c200213be-horizon-secret-key\") pod \"9200d89b-67fd-4267-bef8-842c200213be\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.131361 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wkgl\" (UniqueName: \"kubernetes.io/projected/9200d89b-67fd-4267-bef8-842c200213be-kube-api-access-7wkgl\") pod \"9200d89b-67fd-4267-bef8-842c200213be\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.131496 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-config-data\") pod \"9200d89b-67fd-4267-bef8-842c200213be\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.131521 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9200d89b-67fd-4267-bef8-842c200213be-logs\") pod \"9200d89b-67fd-4267-bef8-842c200213be\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.131540 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-scripts\") pod \"9200d89b-67fd-4267-bef8-842c200213be\" (UID: \"9200d89b-67fd-4267-bef8-842c200213be\") " Jan 23 09:40:29 crc 
kubenswrapper[4982]: I0123 09:40:29.132149 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9200d89b-67fd-4267-bef8-842c200213be-logs" (OuterVolumeSpecName: "logs") pod "9200d89b-67fd-4267-bef8-842c200213be" (UID: "9200d89b-67fd-4267-bef8-842c200213be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.137815 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9200d89b-67fd-4267-bef8-842c200213be-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9200d89b-67fd-4267-bef8-842c200213be" (UID: "9200d89b-67fd-4267-bef8-842c200213be"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.138037 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9200d89b-67fd-4267-bef8-842c200213be-kube-api-access-7wkgl" (OuterVolumeSpecName: "kube-api-access-7wkgl") pod "9200d89b-67fd-4267-bef8-842c200213be" (UID: "9200d89b-67fd-4267-bef8-842c200213be"). InnerVolumeSpecName "kube-api-access-7wkgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.150521 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.172327 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-config-data" (OuterVolumeSpecName: "config-data") pod "9200d89b-67fd-4267-bef8-842c200213be" (UID: "9200d89b-67fd-4267-bef8-842c200213be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.177134 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-scripts" (OuterVolumeSpecName: "scripts") pod "9200d89b-67fd-4267-bef8-842c200213be" (UID: "9200d89b-67fd-4267-bef8-842c200213be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.234723 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86199b3e-3bb1-4425-96c9-8631b04ccb5d-logs\") pod \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.234998 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86199b3e-3bb1-4425-96c9-8631b04ccb5d-horizon-secret-key\") pod \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.235097 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86199b3e-3bb1-4425-96c9-8631b04ccb5d-logs" (OuterVolumeSpecName: "logs") pod "86199b3e-3bb1-4425-96c9-8631b04ccb5d" (UID: "86199b3e-3bb1-4425-96c9-8631b04ccb5d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.235115 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-config-data\") pod \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.235300 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjrk4\" (UniqueName: \"kubernetes.io/projected/86199b3e-3bb1-4425-96c9-8631b04ccb5d-kube-api-access-kjrk4\") pod \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.235360 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-scripts\") pod \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\" (UID: \"86199b3e-3bb1-4425-96c9-8631b04ccb5d\") " Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.236301 4982 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9200d89b-67fd-4267-bef8-842c200213be-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.236319 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86199b3e-3bb1-4425-96c9-8631b04ccb5d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.236330 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wkgl\" (UniqueName: \"kubernetes.io/projected/9200d89b-67fd-4267-bef8-842c200213be-kube-api-access-7wkgl\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.236339 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.236347 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9200d89b-67fd-4267-bef8-842c200213be-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.236356 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9200d89b-67fd-4267-bef8-842c200213be-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.238084 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86199b3e-3bb1-4425-96c9-8631b04ccb5d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "86199b3e-3bb1-4425-96c9-8631b04ccb5d" (UID: "86199b3e-3bb1-4425-96c9-8631b04ccb5d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.238649 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86199b3e-3bb1-4425-96c9-8631b04ccb5d-kube-api-access-kjrk4" (OuterVolumeSpecName: "kube-api-access-kjrk4") pod "86199b3e-3bb1-4425-96c9-8631b04ccb5d" (UID: "86199b3e-3bb1-4425-96c9-8631b04ccb5d"). InnerVolumeSpecName "kube-api-access-kjrk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.261471 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-scripts" (OuterVolumeSpecName: "scripts") pod "86199b3e-3bb1-4425-96c9-8631b04ccb5d" (UID: "86199b3e-3bb1-4425-96c9-8631b04ccb5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.261926 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-config-data" (OuterVolumeSpecName: "config-data") pod "86199b3e-3bb1-4425-96c9-8631b04ccb5d" (UID: "86199b3e-3bb1-4425-96c9-8631b04ccb5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.338844 4982 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/86199b3e-3bb1-4425-96c9-8631b04ccb5d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.339100 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.339190 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjrk4\" (UniqueName: \"kubernetes.io/projected/86199b3e-3bb1-4425-96c9-8631b04ccb5d-kube-api-access-kjrk4\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.339281 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86199b3e-3bb1-4425-96c9-8631b04ccb5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.763022 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b8bcdc95c-db4xb" event={"ID":"86199b3e-3bb1-4425-96c9-8631b04ccb5d","Type":"ContainerDied","Data":"92bf8a81e019a0f40db995282cc07b54f0bb0b84261c7dae6a0e0b6e48b1095f"} Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.763075 4982 scope.go:117] "RemoveContainer" containerID="f5e2e7372683319e2a89a8f415f4bb163dacf95a7fa2c6b9b8ac41997c200ffe" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.763076 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b8bcdc95c-db4xb" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.765602 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-686667bfcc-bxstn" event={"ID":"9200d89b-67fd-4267-bef8-842c200213be","Type":"ContainerDied","Data":"116804879d0d8de61d38de484debce427e6dc8c0f29ca1f1ce98e338120e81d8"} Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.765660 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-686667bfcc-bxstn" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.816238 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-686667bfcc-bxstn"] Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.817148 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.138:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.138:8443: connect: connection refused" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.826783 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-686667bfcc-bxstn"] Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.841801 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9200d89b-67fd-4267-bef8-842c200213be" path="/var/lib/kubelet/pods/9200d89b-67fd-4267-bef8-842c200213be/volumes" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.842605 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b8bcdc95c-db4xb"] Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.844806 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b8bcdc95c-db4xb"] Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.944372 4982 scope.go:117] "RemoveContainer" containerID="90f50ff30ceaf705749777b84ada9e74be0f513fd226b3d77d26fa4161c18a70" Jan 23 09:40:29 crc kubenswrapper[4982]: I0123 09:40:29.973082 4982 scope.go:117] "RemoveContainer" containerID="834377c33d847a4079e55572e29239e404b9b3cbaf321bad628e25e5b03fa672" Jan 23 09:40:30 crc kubenswrapper[4982]: I0123 09:40:30.133963 4982 scope.go:117] "RemoveContainer" containerID="a12f0d88db791421f881b94a9aa9dc8a60a2b88f70723a7b2ae5e18f95772cee" Jan 23 09:40:31 crc kubenswrapper[4982]: I0123 09:40:31.838125 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" path="/var/lib/kubelet/pods/86199b3e-3bb1-4425-96c9-8631b04ccb5d/volumes" Jan 23 09:40:39 crc kubenswrapper[4982]: I0123 09:40:39.817089 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.138:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.138:8443: connect: connection refused" Jan 23 09:40:45 crc kubenswrapper[4982]: I0123 09:40:45.043554 4982 scope.go:117] "RemoveContainer" containerID="9df2b43aaa2efef4ab3060ea5bcadbd4f0dc99e1d54da103d27cd9aed6760821" Jan 23 09:40:45 crc kubenswrapper[4982]: I0123 09:40:45.076646 4982 scope.go:117] "RemoveContainer" containerID="0473bc43df2939734ecf07803266c09009442222e3aa30024717e3403b6baf49" Jan 23 09:40:45 crc kubenswrapper[4982]: I0123 09:40:45.114837 4982 scope.go:117] "RemoveContainer" containerID="760de6f158aa466391aa0d4bfcf4f903d21f9c85bc52bb4e4f8bf2a362d4efd6" Jan 23 09:40:45 crc kubenswrapper[4982]: I0123 09:40:45.199276 4982 scope.go:117] "RemoveContainer" containerID="17cef1f0cabf4e5278e702b1ac9f4319db73ebaf8713f6e2e8341dc092cb93bc" Jan 23 09:40:45 crc kubenswrapper[4982]: I0123 09:40:45.235742 4982 scope.go:117] "RemoveContainer" containerID="8a65799603c27e31eb775be9e5ea32340fbe9357a118123727f2210c5987765e" Jan 23 09:40:49 crc kubenswrapper[4982]: I0123 09:40:49.817199 4982 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-89645b8f8-vzg9j" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.138:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.138:8443: connect: connection refused" Jan 23 09:40:53 crc kubenswrapper[4982]: I0123 09:40:53.964875 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.043088 4982 generic.go:334] "Generic (PLEG): container finished" podID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerID="89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b" exitCode=137 Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.043158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89645b8f8-vzg9j" event={"ID":"f1cf34b8-04ae-4bd5-be37-b5cd559e3526","Type":"ContainerDied","Data":"89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b"} Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.043592 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89645b8f8-vzg9j" event={"ID":"f1cf34b8-04ae-4bd5-be37-b5cd559e3526","Type":"ContainerDied","Data":"3df5ce9ea7d982834f2ec12eeabfcb1c303a2b12d142bd0d788d84255c9e6226"} Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.043244 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89645b8f8-vzg9j" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.043695 4982 scope.go:117] "RemoveContainer" containerID="e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.155084 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-config-data\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.155486 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-scripts\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.155714 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-tls-certs\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.155935 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-logs\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.156099 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-secret-key\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.156409 4982 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-287nt\" (UniqueName: \"kubernetes.io/projected/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-kube-api-access-287nt\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.156564 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-logs" (OuterVolumeSpecName: "logs") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.156801 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-combined-ca-bundle\") pod \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\" (UID: \"f1cf34b8-04ae-4bd5-be37-b5cd559e3526\") " Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.157758 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.160726 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-kube-api-access-287nt" (OuterVolumeSpecName: "kube-api-access-287nt") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "kube-api-access-287nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.163440 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.194496 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-scripts" (OuterVolumeSpecName: "scripts") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.202772 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-config-data" (OuterVolumeSpecName: "config-data") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.208545 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.225572 4982 scope.go:117] "RemoveContainer" containerID="89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.237758 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f1cf34b8-04ae-4bd5-be37-b5cd559e3526" (UID: "f1cf34b8-04ae-4bd5-be37-b5cd559e3526"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.260501 4982 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.260540 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-287nt\" (UniqueName: \"kubernetes.io/projected/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-kube-api-access-287nt\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.260569 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.260581 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.260589 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.260598 4982 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1cf34b8-04ae-4bd5-be37-b5cd559e3526-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.317110 4982 scope.go:117] "RemoveContainer" containerID="e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c" Jan 23 09:40:54 crc kubenswrapper[4982]: E0123 09:40:54.317867 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c\": container with ID starting with e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c not found: ID does not exist" containerID="e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.317910 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c"} err="failed to get container status \"e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c\": rpc error: code = NotFound desc = could not find container \"e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c\": container with ID starting with e6badcfb9fe816eb582272d3d45dc937dd6e04c57769cb393145be3f4251f96c not found: ID does not exist" Jan 23 09:40:54 crc 
kubenswrapper[4982]: I0123 09:40:54.317957 4982 scope.go:117] "RemoveContainer" containerID="89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b" Jan 23 09:40:54 crc kubenswrapper[4982]: E0123 09:40:54.318766 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b\": container with ID starting with 89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b not found: ID does not exist" containerID="89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.318823 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b"} err="failed to get container status \"89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b\": rpc error: code = NotFound desc = could not find container \"89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b\": container with ID starting with 89b2a7c626c4f2b66e5553173c1c1a60c73e32f0fc7e6eab6574295c0634741b not found: ID does not exist" Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.383696 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89645b8f8-vzg9j"] Jan 23 09:40:54 crc kubenswrapper[4982]: I0123 09:40:54.394812 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-89645b8f8-vzg9j"] Jan 23 09:40:55 crc kubenswrapper[4982]: I0123 09:40:55.839963 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" path="/var/lib/kubelet/pods/f1cf34b8-04ae-4bd5-be37-b5cd559e3526/volumes" Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.212205 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5977db955d-zpcv9"] Jan 23 09:41:04 crc kubenswrapper[4982]: E0123 09:41:04.213308 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon" Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213323 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon" Jan 23 09:41:04 crc kubenswrapper[4982]: E0123 09:41:04.213340 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon" Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213347 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon" Jan 23 09:41:04 crc kubenswrapper[4982]: E0123 09:41:04.213365 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon-log" Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213372 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon-log" Jan 23 09:41:04 crc kubenswrapper[4982]: E0123 09:41:04.213396 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon-log" Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213403 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon-log" Jan 23 09:41:04 crc kubenswrapper[4982]: E0123 09:41:04.213418 4982 
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213425 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon"
Jan 23 09:41:04 crc kubenswrapper[4982]: E0123 09:41:04.213437 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon-log"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213443 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon-log"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213682 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213699 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon-log"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213724 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="9200d89b-67fd-4267-bef8-842c200213be" containerName="horizon-log"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213736 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon-log"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213755 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="86199b3e-3bb1-4425-96c9-8631b04ccb5d" containerName="horizon"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.213767 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1cf34b8-04ae-4bd5-be37-b5cd559e3526" containerName="horizon"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.214985 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.236090 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5977db955d-zpcv9"]
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.378757 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c23088a-731e-4f50-9faa-70f390b4f3df-logs\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.378862 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-horizon-tls-certs\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.378895 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfhv\" (UniqueName: \"kubernetes.io/projected/0c23088a-731e-4f50-9faa-70f390b4f3df-kube-api-access-zhfhv\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.378951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-combined-ca-bundle\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.379034 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-horizon-secret-key\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.379058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c23088a-731e-4f50-9faa-70f390b4f3df-scripts\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.379093 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c23088a-731e-4f50-9faa-70f390b4f3df-config-data\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.480895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-horizon-tls-certs\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.480955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhfhv\" (UniqueName: \"kubernetes.io/projected/0c23088a-731e-4f50-9faa-70f390b4f3df-kube-api-access-zhfhv\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.481014 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-combined-ca-bundle\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.481099 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-horizon-secret-key\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.481124 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c23088a-731e-4f50-9faa-70f390b4f3df-scripts\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.481160 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c23088a-731e-4f50-9faa-70f390b4f3df-config-data\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.481184 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c23088a-731e-4f50-9faa-70f390b4f3df-logs\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.481661 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c23088a-731e-4f50-9faa-70f390b4f3df-logs\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.483355 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0c23088a-731e-4f50-9faa-70f390b4f3df-scripts\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.483939 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c23088a-731e-4f50-9faa-70f390b4f3df-config-data\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.488427 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-combined-ca-bundle\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.499957 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-horizon-tls-certs\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.499977 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0c23088a-731e-4f50-9faa-70f390b4f3df-horizon-secret-key\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.500857 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhfhv\" (UniqueName: \"kubernetes.io/projected/0c23088a-731e-4f50-9faa-70f390b4f3df-kube-api-access-zhfhv\") pod \"horizon-5977db955d-zpcv9\" (UID: \"0c23088a-731e-4f50-9faa-70f390b4f3df\") " pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:04 crc kubenswrapper[4982]: I0123 09:41:04.542229 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.071100 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5977db955d-zpcv9"]
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.158853 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977db955d-zpcv9" event={"ID":"0c23088a-731e-4f50-9faa-70f390b4f3df","Type":"ContainerStarted","Data":"890c7dd698d68a529ed325503e5b76f42b95d1ac0b08594f4c9a8dd6464ac597"}
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.750913 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-48xj2"]
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.753078 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-48xj2"
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.781482 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-48xj2"]
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.859378 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-8ece-account-create-update-rczvd"]
Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.869281 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8ece-account-create-update-rczvd"
Need to start a new one" pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.870862 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8ece-account-create-update-rczvd"] Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.871524 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.935815 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv95t\" (UniqueName: \"kubernetes.io/projected/42c5b764-69e5-48e3-95fe-6a3e87500b01-kube-api-access-vv95t\") pod \"heat-db-create-48xj2\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " pod="openstack/heat-db-create-48xj2" Jan 23 09:41:05 crc kubenswrapper[4982]: I0123 09:41:05.936058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c5b764-69e5-48e3-95fe-6a3e87500b01-operator-scripts\") pod \"heat-db-create-48xj2\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " pod="openstack/heat-db-create-48xj2" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.037554 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc82f838-20c6-440b-9526-cefa8af7a820-operator-scripts\") pod \"heat-8ece-account-create-update-rczvd\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.037840 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv95t\" (UniqueName: \"kubernetes.io/projected/42c5b764-69e5-48e3-95fe-6a3e87500b01-kube-api-access-vv95t\") pod \"heat-db-create-48xj2\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " pod="openstack/heat-db-create-48xj2" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.038015 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c5b764-69e5-48e3-95fe-6a3e87500b01-operator-scripts\") pod \"heat-db-create-48xj2\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " pod="openstack/heat-db-create-48xj2" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.038058 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvq4\" (UniqueName: \"kubernetes.io/projected/dc82f838-20c6-440b-9526-cefa8af7a820-kube-api-access-qxvq4\") pod \"heat-8ece-account-create-update-rczvd\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.039238 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c5b764-69e5-48e3-95fe-6a3e87500b01-operator-scripts\") pod \"heat-db-create-48xj2\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " pod="openstack/heat-db-create-48xj2" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.055893 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv95t\" (UniqueName: \"kubernetes.io/projected/42c5b764-69e5-48e3-95fe-6a3e87500b01-kube-api-access-vv95t\") pod \"heat-db-create-48xj2\" (UID: 
\"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " pod="openstack/heat-db-create-48xj2" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.077609 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-48xj2" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.139788 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxvq4\" (UniqueName: \"kubernetes.io/projected/dc82f838-20c6-440b-9526-cefa8af7a820-kube-api-access-qxvq4\") pod \"heat-8ece-account-create-update-rczvd\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.139955 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc82f838-20c6-440b-9526-cefa8af7a820-operator-scripts\") pod \"heat-8ece-account-create-update-rczvd\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.140910 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc82f838-20c6-440b-9526-cefa8af7a820-operator-scripts\") pod \"heat-8ece-account-create-update-rczvd\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.162458 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxvq4\" (UniqueName: \"kubernetes.io/projected/dc82f838-20c6-440b-9526-cefa8af7a820-kube-api-access-qxvq4\") pod \"heat-8ece-account-create-update-rczvd\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.172356 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977db955d-zpcv9" event={"ID":"0c23088a-731e-4f50-9faa-70f390b4f3df","Type":"ContainerStarted","Data":"e8d90ce8e78fa7b794f5a9659a10b8ce00a0e8ec27ba997266a27ec10c2bf857"} Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.172418 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5977db955d-zpcv9" event={"ID":"0c23088a-731e-4f50-9faa-70f390b4f3df","Type":"ContainerStarted","Data":"a7adeac36d5ea7b9049ee9e25626b4c8e02a6dac0838869415f05e85ee426b0a"} Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.200344 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:06 crc kubenswrapper[4982]: I0123 09:41:06.200708 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5977db955d-zpcv9" podStartSLOduration=2.200683434 podStartE2EDuration="2.200683434s" podCreationTimestamp="2026-01-23 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:06.189082926 +0000 UTC m=+6340.669446312" watchObservedRunningTime="2026-01-23 09:41:06.200683434 +0000 UTC m=+6340.681046820" Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:06.629830 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-48xj2"] Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:06.770763 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-8ece-account-create-update-rczvd"] Jan 23 09:41:07 crc kubenswrapper[4982]: W0123 09:41:06.777999 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc82f838_20c6_440b_9526_cefa8af7a820.slice/crio-c65af56e6981872a8418186e75bd55dbb8c7cdb133b13fcc6a212f5ac0550643 WatchSource:0}: Error finding container c65af56e6981872a8418186e75bd55dbb8c7cdb133b13fcc6a212f5ac0550643: Status 404 returned error can't find the container with id c65af56e6981872a8418186e75bd55dbb8c7cdb133b13fcc6a212f5ac0550643 Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:07.201001 4982 generic.go:334] "Generic (PLEG): container finished" podID="42c5b764-69e5-48e3-95fe-6a3e87500b01" containerID="629922a47912269303bf2d86e91efd3b3de349600ffd03f52a2a9e7ea49c42ba" exitCode=0 Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:07.201067 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-48xj2" event={"ID":"42c5b764-69e5-48e3-95fe-6a3e87500b01","Type":"ContainerDied","Data":"629922a47912269303bf2d86e91efd3b3de349600ffd03f52a2a9e7ea49c42ba"} Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:07.201096 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-48xj2" event={"ID":"42c5b764-69e5-48e3-95fe-6a3e87500b01","Type":"ContainerStarted","Data":"3b423816c45525d6dce07b7378fcbdc357fe4b7131d3fd2ce28aebd08bba73dd"} Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:07.204782 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ece-account-create-update-rczvd" event={"ID":"dc82f838-20c6-440b-9526-cefa8af7a820","Type":"ContainerStarted","Data":"40623f9098be15db43ad4b033f6f4c3cde1bbdf13d7159bbe8d986172b18a330"} Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:07.204823 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ece-account-create-update-rczvd" event={"ID":"dc82f838-20c6-440b-9526-cefa8af7a820","Type":"ContainerStarted","Data":"c65af56e6981872a8418186e75bd55dbb8c7cdb133b13fcc6a212f5ac0550643"} Jan 23 09:41:07 crc kubenswrapper[4982]: I0123 09:41:07.242289 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-8ece-account-create-update-rczvd" podStartSLOduration=2.242269085 podStartE2EDuration="2.242269085s" podCreationTimestamp="2026-01-23 09:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:07.232936249 +0000 UTC m=+6341.713299635" 
watchObservedRunningTime="2026-01-23 09:41:07.242269085 +0000 UTC m=+6341.722632471" Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.215314 4982 generic.go:334] "Generic (PLEG): container finished" podID="dc82f838-20c6-440b-9526-cefa8af7a820" containerID="40623f9098be15db43ad4b033f6f4c3cde1bbdf13d7159bbe8d986172b18a330" exitCode=0 Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.216304 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ece-account-create-update-rczvd" event={"ID":"dc82f838-20c6-440b-9526-cefa8af7a820","Type":"ContainerDied","Data":"40623f9098be15db43ad4b033f6f4c3cde1bbdf13d7159bbe8d986172b18a330"} Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.683006 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-48xj2" Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.821285 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv95t\" (UniqueName: \"kubernetes.io/projected/42c5b764-69e5-48e3-95fe-6a3e87500b01-kube-api-access-vv95t\") pod \"42c5b764-69e5-48e3-95fe-6a3e87500b01\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.821596 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c5b764-69e5-48e3-95fe-6a3e87500b01-operator-scripts\") pod \"42c5b764-69e5-48e3-95fe-6a3e87500b01\" (UID: \"42c5b764-69e5-48e3-95fe-6a3e87500b01\") " Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.822261 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c5b764-69e5-48e3-95fe-6a3e87500b01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42c5b764-69e5-48e3-95fe-6a3e87500b01" (UID: "42c5b764-69e5-48e3-95fe-6a3e87500b01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.826729 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c5b764-69e5-48e3-95fe-6a3e87500b01-kube-api-access-vv95t" (OuterVolumeSpecName: "kube-api-access-vv95t") pod "42c5b764-69e5-48e3-95fe-6a3e87500b01" (UID: "42c5b764-69e5-48e3-95fe-6a3e87500b01"). InnerVolumeSpecName "kube-api-access-vv95t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.925800 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv95t\" (UniqueName: \"kubernetes.io/projected/42c5b764-69e5-48e3-95fe-6a3e87500b01-kube-api-access-vv95t\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:08 crc kubenswrapper[4982]: I0123 09:41:08.925836 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42c5b764-69e5-48e3-95fe-6a3e87500b01-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.249216 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-48xj2" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.252357 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-48xj2" event={"ID":"42c5b764-69e5-48e3-95fe-6a3e87500b01","Type":"ContainerDied","Data":"3b423816c45525d6dce07b7378fcbdc357fe4b7131d3fd2ce28aebd08bba73dd"} Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.252424 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b423816c45525d6dce07b7378fcbdc357fe4b7131d3fd2ce28aebd08bba73dd" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.673244 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.852513 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc82f838-20c6-440b-9526-cefa8af7a820-operator-scripts\") pod \"dc82f838-20c6-440b-9526-cefa8af7a820\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.852856 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxvq4\" (UniqueName: \"kubernetes.io/projected/dc82f838-20c6-440b-9526-cefa8af7a820-kube-api-access-qxvq4\") pod \"dc82f838-20c6-440b-9526-cefa8af7a820\" (UID: \"dc82f838-20c6-440b-9526-cefa8af7a820\") " Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.855984 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc82f838-20c6-440b-9526-cefa8af7a820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc82f838-20c6-440b-9526-cefa8af7a820" (UID: "dc82f838-20c6-440b-9526-cefa8af7a820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.873604 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc82f838-20c6-440b-9526-cefa8af7a820-kube-api-access-qxvq4" (OuterVolumeSpecName: "kube-api-access-qxvq4") pod "dc82f838-20c6-440b-9526-cefa8af7a820" (UID: "dc82f838-20c6-440b-9526-cefa8af7a820"). InnerVolumeSpecName "kube-api-access-qxvq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.955619 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxvq4\" (UniqueName: \"kubernetes.io/projected/dc82f838-20c6-440b-9526-cefa8af7a820-kube-api-access-qxvq4\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:09 crc kubenswrapper[4982]: I0123 09:41:09.955685 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc82f838-20c6-440b-9526-cefa8af7a820-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:10 crc kubenswrapper[4982]: I0123 09:41:10.274879 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-8ece-account-create-update-rczvd" event={"ID":"dc82f838-20c6-440b-9526-cefa8af7a820","Type":"ContainerDied","Data":"c65af56e6981872a8418186e75bd55dbb8c7cdb133b13fcc6a212f5ac0550643"} Jan 23 09:41:10 crc kubenswrapper[4982]: I0123 09:41:10.274924 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65af56e6981872a8418186e75bd55dbb8c7cdb133b13fcc6a212f5ac0550643" Jan 23 09:41:10 crc kubenswrapper[4982]: I0123 09:41:10.274934 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-8ece-account-create-update-rczvd" Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.060594 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-xxwv2"] Jan 23 09:41:11 crc kubenswrapper[4982]: E0123 09:41:11.061536 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c5b764-69e5-48e3-95fe-6a3e87500b01" containerName="mariadb-database-create" Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.061557 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c5b764-69e5-48e3-95fe-6a3e87500b01" containerName="mariadb-database-create" Jan 23 09:41:11 crc kubenswrapper[4982]: E0123 09:41:11.061588 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc82f838-20c6-440b-9526-cefa8af7a820" containerName="mariadb-account-create-update" Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.061596 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc82f838-20c6-440b-9526-cefa8af7a820" containerName="mariadb-account-create-update" Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.061852 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c5b764-69e5-48e3-95fe-6a3e87500b01" containerName="mariadb-database-create" Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.061876 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc82f838-20c6-440b-9526-cefa8af7a820" containerName="mariadb-account-create-update" Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.062666 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.067102 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.067250 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rwtwb"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.080272 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xxwv2"]
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.193431 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-combined-ca-bundle\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.193495 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s586h\" (UniqueName: \"kubernetes.io/projected/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-kube-api-access-s586h\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.193871 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-config-data\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.295516 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s586h\" (UniqueName: \"kubernetes.io/projected/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-kube-api-access-s586h\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.295654 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-config-data\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.295750 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-combined-ca-bundle\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.301067 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-config-data\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.301502 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-combined-ca-bundle\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
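[Editor's note] The three heat-db-sync-xxwv2 volumes above walk through the volume manager's standard mount pipeline: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A minimal sketch of that reconcile pass follows; the type and loop are invented for illustration (the real logic lives in kubelet's volumemanager reconciler and operation_generator.go, and is far more involved):

```go
// Simplified model of the mount-side reconcile pass that produces the
// "VerifyControllerAttachedVolume ... MountVolume started ... SetUp succeeded"
// sequences above. Names and structure are illustrative, not kubelet's API.
package main

import "fmt"

type volume struct{ name, plugin string }

func main() {
	// Desired state of world: volumes the heat-db-sync-xxwv2 pod spec requires.
	desired := []volume{
		{"combined-ca-bundle", "kubernetes.io/secret"},
		{"config-data", "kubernetes.io/secret"},
		{"kube-api-access-s586h", "kubernetes.io/projected"},
	}
	mounted := map[string]bool{} // actual state of world: nothing mounted yet

	for _, v := range desired {
		if mounted[v.name] {
			continue // already mounted; the reconciler leaves it alone
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q (%s)\n", v.name, v.plugin)
		fmt.Printf("MountVolume started for %q\n", v.name)
		// ... the plugin's SetUp() would materialize the tmpfs mount here ...
		mounted[v.name] = true
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", v.name)
	}
}
```

The unmount-side entries earlier in the log (UnmountVolume started, TearDown succeeded, "Volume detached") are the mirror image of the same loop, run when a pod leaves the desired state.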
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.311027 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s586h\" (UniqueName: \"kubernetes.io/projected/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-kube-api-access-s586h\") pod \"heat-db-sync-xxwv2\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") " pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.382044 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:11 crc kubenswrapper[4982]: I0123 09:41:11.988674 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-xxwv2"]
Jan 23 09:41:12 crc kubenswrapper[4982]: I0123 09:41:12.001216 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 09:41:12 crc kubenswrapper[4982]: I0123 09:41:12.293794 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xxwv2" event={"ID":"cc085d0e-3b5f-40ac-b0fd-a5da6c671173","Type":"ContainerStarted","Data":"b8457971a3bf8fd8b5d776cdaa88d01897d8694ae88cc77a2417e3cbbd2b33ed"}
Jan 23 09:41:14 crc kubenswrapper[4982]: I0123 09:41:14.543969 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:14 crc kubenswrapper[4982]: I0123 09:41:14.546419 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:20 crc kubenswrapper[4982]: I0123 09:41:20.393087 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xxwv2" event={"ID":"cc085d0e-3b5f-40ac-b0fd-a5da6c671173","Type":"ContainerStarted","Data":"85b06b5030db05e3c9e2e029f4b6e88c32cca69cf48df394d9eda4b87aaa24f0"}
Jan 23 09:41:20 crc kubenswrapper[4982]: I0123 09:41:20.409755 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-xxwv2" podStartSLOduration=1.484218115 podStartE2EDuration="9.409731858s" podCreationTimestamp="2026-01-23 09:41:11 +0000 UTC" firstStartedPulling="2026-01-23 09:41:12.000996242 +0000 UTC m=+6346.481359638" lastFinishedPulling="2026-01-23 09:41:19.926509995 +0000 UTC m=+6354.406873381" observedRunningTime="2026-01-23 09:41:20.406296313 +0000 UTC m=+6354.886659699" watchObservedRunningTime="2026-01-23 09:41:20.409731858 +0000 UTC m=+6354.890095244"
Jan 23 09:41:23 crc kubenswrapper[4982]: I0123 09:41:23.435298 4982 generic.go:334] "Generic (PLEG): container finished" podID="cc085d0e-3b5f-40ac-b0fd-a5da6c671173" containerID="85b06b5030db05e3c9e2e029f4b6e88c32cca69cf48df394d9eda4b87aaa24f0" exitCode=0
Jan 23 09:41:23 crc kubenswrapper[4982]: I0123 09:41:23.435438 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xxwv2" event={"ID":"cc085d0e-3b5f-40ac-b0fd-a5da6c671173","Type":"ContainerDied","Data":"85b06b5030db05e3c9e2e029f4b6e88c32cca69cf48df394d9eda4b87aaa24f0"}
Jan 23 09:41:24 crc kubenswrapper[4982]: I0123 09:41:24.546549 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5977db955d-zpcv9" podUID="0c23088a-731e-4f50-9faa-70f390b4f3df" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.142:8443: connect: connection refused"
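[Editor's note] The "Observed pod startup duration" entry for heat-db-sync-xxwv2 carries its own arithmetic: podStartE2EDuration (9.409731858s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.484218115) comes out to E2E minus the image-pull window when the pull timestamps are taken from their monotonic m=+ offsets (seconds since kubelet start) rather than the wall-clock values. This is a check of the logged fields, not a claim about the tracker's source:

```go
// Reproducing the pod_startup_latency_tracker numbers for heat-db-sync-xxwv2
// from the fields in the log line above. The m=+<seconds> suffixes are
// monotonic-clock offsets; using them for the pull window matches the
// reported SLO duration (up to float rounding).
package main

import "fmt"

func main() {
	const (
		e2e       = 9.409731858    // watchObservedRunningTime - podCreationTimestamp
		pullStart = 6346.481359638 // firstStartedPulling, m=+ offset
		pullEnd   = 6354.406873381 // lastFinishedPulling, m=+ offset
	)
	slo := e2e - (pullEnd - pullStart) // startup time excluding image pulls
	fmt.Printf("podStartE2EDuration ≈ %.9fs\n", e2e) // logged: 9.409731858s
	fmt.Printf("podStartSLOduration ≈ %.9f\n", slo)  // logged: 1.484218115
}
```

In other words, of the ~9.4 s this pod took to start, ~7.93 s was spent pulling the image and only ~1.48 s counts against the startup SLO.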
Jan 23 09:41:24 crc kubenswrapper[4982]: I0123 09:41:24.916351 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-xxwv2"
Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.025496 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-combined-ca-bundle\") pod \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") "
Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.025539 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-config-data\") pod \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") "
Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.025572 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s586h\" (UniqueName: \"kubernetes.io/projected/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-kube-api-access-s586h\") pod \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\" (UID: \"cc085d0e-3b5f-40ac-b0fd-a5da6c671173\") "
Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.035356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-kube-api-access-s586h" (OuterVolumeSpecName: "kube-api-access-s586h") pod "cc085d0e-3b5f-40ac-b0fd-a5da6c671173" (UID: "cc085d0e-3b5f-40ac-b0fd-a5da6c671173"). InnerVolumeSpecName "kube-api-access-s586h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.073890 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f518-account-create-update-qxfr8"]
Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.077363 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc085d0e-3b5f-40ac-b0fd-a5da6c671173" (UID: "cc085d0e-3b5f-40ac-b0fd-a5da6c671173"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.098841 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8f8bd"] Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.111403 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f518-account-create-update-qxfr8"] Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.121116 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8f8bd"] Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.128413 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.128522 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s586h\" (UniqueName: \"kubernetes.io/projected/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-kube-api-access-s586h\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.145387 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-config-data" (OuterVolumeSpecName: "config-data") pod "cc085d0e-3b5f-40ac-b0fd-a5da6c671173" (UID: "cc085d0e-3b5f-40ac-b0fd-a5da6c671173"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.230170 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc085d0e-3b5f-40ac-b0fd-a5da6c671173-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.457155 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-xxwv2" event={"ID":"cc085d0e-3b5f-40ac-b0fd-a5da6c671173","Type":"ContainerDied","Data":"b8457971a3bf8fd8b5d776cdaa88d01897d8694ae88cc77a2417e3cbbd2b33ed"} Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.457433 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8457971a3bf8fd8b5d776cdaa88d01897d8694ae88cc77a2417e3cbbd2b33ed" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.457246 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-xxwv2" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.854165 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5d8863-69f4-487e-b334-344289519860" path="/var/lib/kubelet/pods/5a5d8863-69f4-487e-b334-344289519860/volumes" Jan 23 09:41:25 crc kubenswrapper[4982]: I0123 09:41:25.855053 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f723816-9cde-4d95-af5b-6028f03f3374" path="/var/lib/kubelet/pods/8f723816-9cde-4d95-af5b-6028f03f3374/volumes" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.818826 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b54869999-pdzfb"] Jan 23 09:41:26 crc kubenswrapper[4982]: E0123 09:41:26.819898 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc085d0e-3b5f-40ac-b0fd-a5da6c671173" containerName="heat-db-sync" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.819984 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc085d0e-3b5f-40ac-b0fd-a5da6c671173" containerName="heat-db-sync" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.820286 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc085d0e-3b5f-40ac-b0fd-a5da6c671173" containerName="heat-db-sync" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.821025 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.824479 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rwtwb" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.824707 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.824848 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.834589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b54869999-pdzfb"] Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.969804 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.969855 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-combined-ca-bundle\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.970114 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llsc6\" (UniqueName: \"kubernetes.io/projected/3bd2969b-823e-47c3-bbca-42d4f738df63-kube-api-access-llsc6\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:26 crc kubenswrapper[4982]: I0123 09:41:26.970178 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data-custom\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.015292 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-697fbb5cdf-v5p2z"] Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.017087 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.020139 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.041060 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-9fcdf944-5vdsm"] Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.042451 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.055012 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.057898 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-9fcdf944-5vdsm"] Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.071899 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llsc6\" (UniqueName: \"kubernetes.io/projected/3bd2969b-823e-47c3-bbca-42d4f738df63-kube-api-access-llsc6\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.071959 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data-custom\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.072050 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.072069 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-combined-ca-bundle\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.078182 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-697fbb5cdf-v5p2z"] Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.083116 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data-custom\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 
23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.091164 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-combined-ca-bundle\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.112289 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llsc6\" (UniqueName: \"kubernetes.io/projected/3bd2969b-823e-47c3-bbca-42d4f738df63-kube-api-access-llsc6\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.113106 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data\") pod \"heat-engine-7b54869999-pdzfb\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.159695 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.174585 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x2k7\" (UniqueName: \"kubernetes.io/projected/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-kube-api-access-8x2k7\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.174662 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data-custom\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.174756 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhwd\" (UniqueName: \"kubernetes.io/projected/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-kube-api-access-gjhwd\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.174917 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.174949 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.174978 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-combined-ca-bundle\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.175016 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-combined-ca-bundle\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.175102 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data-custom\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.276997 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277191 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-combined-ca-bundle\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277237 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-combined-ca-bundle\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277311 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data-custom\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277341 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x2k7\" (UniqueName: \"kubernetes.io/projected/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-kube-api-access-8x2k7\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277366 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data-custom\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277400 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjhwd\" 
(UniqueName: \"kubernetes.io/projected/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-kube-api-access-gjhwd\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.277493 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.286026 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-combined-ca-bundle\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.286443 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.293062 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data-custom\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.317383 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-combined-ca-bundle\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.321296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x2k7\" (UniqueName: \"kubernetes.io/projected/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-kube-api-access-8x2k7\") pod \"heat-cfnapi-697fbb5cdf-v5p2z\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") " pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.322781 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.323370 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data-custom\") pod \"heat-api-9fcdf944-5vdsm\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.329490 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjhwd\" (UniqueName: \"kubernetes.io/projected/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-kube-api-access-gjhwd\") pod \"heat-api-9fcdf944-5vdsm\" (UID: 
\"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") " pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.364022 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.385752 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:27 crc kubenswrapper[4982]: I0123 09:41:27.801651 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b54869999-pdzfb"] Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.020349 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-9fcdf944-5vdsm"] Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.501474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9fcdf944-5vdsm" event={"ID":"e67bef95-3f1a-4ebd-920f-eb5062c3eb52","Type":"ContainerStarted","Data":"3fd8b7c7857748f04a85fda1d167b1fd8905f77471135f138aaf104a141ff7c5"} Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.510800 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b54869999-pdzfb" event={"ID":"3bd2969b-823e-47c3-bbca-42d4f738df63","Type":"ContainerStarted","Data":"951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c"} Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.511006 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b54869999-pdzfb" event={"ID":"3bd2969b-823e-47c3-bbca-42d4f738df63","Type":"ContainerStarted","Data":"33da7f7eeda61a299527c9bf9f18340a4cbb5549304496b83ee97f4549adfb81"} Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.512218 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.586035 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b54869999-pdzfb" podStartSLOduration=2.586013215 podStartE2EDuration="2.586013215s" podCreationTimestamp="2026-01-23 09:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:28.544986267 +0000 UTC m=+6363.025349643" watchObservedRunningTime="2026-01-23 09:41:28.586013215 +0000 UTC m=+6363.066376601" Jan 23 09:41:28 crc kubenswrapper[4982]: I0123 09:41:28.591596 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-697fbb5cdf-v5p2z"] Jan 23 09:41:29 crc kubenswrapper[4982]: I0123 09:41:29.528818 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" event={"ID":"43aeb060-9827-42ba-9e0a-e8c3b7147dfa","Type":"ContainerStarted","Data":"7145000cc0bbf7a94d575cf3ab0dd66fce83f92da3d984657df2f0af41b52a62"} Jan 23 09:41:31 crc kubenswrapper[4982]: I0123 09:41:31.552959 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" event={"ID":"43aeb060-9827-42ba-9e0a-e8c3b7147dfa","Type":"ContainerStarted","Data":"82f6971e024872c0e78cd2a79c11855488d5ba73996f4cbc7ce2d3a0749c1fda"} Jan 23 09:41:31 crc kubenswrapper[4982]: I0123 09:41:31.554585 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:31 crc kubenswrapper[4982]: I0123 09:41:31.555536 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-9fcdf944-5vdsm" event={"ID":"e67bef95-3f1a-4ebd-920f-eb5062c3eb52","Type":"ContainerStarted","Data":"6f5cc2f5bec220a949fb36522ebad4722d02368590df0247cbaab8d9be7d642b"} Jan 23 09:41:31 crc kubenswrapper[4982]: I0123 09:41:31.555782 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:31 crc kubenswrapper[4982]: I0123 09:41:31.576576 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" podStartSLOduration=3.623429132 podStartE2EDuration="5.576559588s" podCreationTimestamp="2026-01-23 09:41:26 +0000 UTC" firstStartedPulling="2026-01-23 09:41:28.584157184 +0000 UTC m=+6363.064520570" lastFinishedPulling="2026-01-23 09:41:30.53728764 +0000 UTC m=+6365.017651026" observedRunningTime="2026-01-23 09:41:31.574974924 +0000 UTC m=+6366.055338310" watchObservedRunningTime="2026-01-23 09:41:31.576559588 +0000 UTC m=+6366.056922974" Jan 23 09:41:31 crc kubenswrapper[4982]: I0123 09:41:31.599134 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-9fcdf944-5vdsm" podStartSLOduration=3.320536515 podStartE2EDuration="5.599117768s" podCreationTimestamp="2026-01-23 09:41:26 +0000 UTC" firstStartedPulling="2026-01-23 09:41:28.250759129 +0000 UTC m=+6362.731122515" lastFinishedPulling="2026-01-23 09:41:30.529340382 +0000 UTC m=+6365.009703768" observedRunningTime="2026-01-23 09:41:31.596204758 +0000 UTC m=+6366.076568144" watchObservedRunningTime="2026-01-23 09:41:31.599117768 +0000 UTC m=+6366.079481144" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.044184 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-bs57c"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.054899 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-bs57c"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.470706 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6c94454b9f-m7jtc"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.473080 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.490453 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6db55f868d-kb8r2"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.492507 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.533594 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6c94454b9f-m7jtc"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.568016 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-config-data-custom\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.568064 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-combined-ca-bundle\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.568116 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24r6\" (UniqueName: \"kubernetes.io/projected/880081ea-a53e-4a63-9ea8-031a62596458-kube-api-access-n24r6\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.568176 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-config-data\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.574900 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5fdc5c8998-xk56g"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.576926 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.603475 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6db55f868d-kb8r2"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.617259 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fdc5c8998-xk56g"] Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.669830 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24r6\" (UniqueName: \"kubernetes.io/projected/880081ea-a53e-4a63-9ea8-031a62596458-kube-api-access-n24r6\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.669924 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-config-data\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.669947 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data-custom\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.669991 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670041 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670067 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data-custom\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670132 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-combined-ca-bundle\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670264 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhh25\" (UniqueName: \"kubernetes.io/projected/57de2141-7360-4ac2-a13e-1e89f4bfc881-kube-api-access-jhh25\") pod \"heat-api-6db55f868d-kb8r2\" (UID: 
\"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670304 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q77p\" (UniqueName: \"kubernetes.io/projected/77758066-fa47-443d-bb66-9f05cae402bb-kube-api-access-8q77p\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670349 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-combined-ca-bundle\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670423 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-config-data-custom\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.670454 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-combined-ca-bundle\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.685511 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-config-data-custom\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.686722 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-config-data\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.687482 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/880081ea-a53e-4a63-9ea8-031a62596458-combined-ca-bundle\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.688997 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24r6\" (UniqueName: \"kubernetes.io/projected/880081ea-a53e-4a63-9ea8-031a62596458-kube-api-access-n24r6\") pod \"heat-engine-6c94454b9f-m7jtc\" (UID: \"880081ea-a53e-4a63-9ea8-031a62596458\") " pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data\") pod \"heat-api-6db55f868d-kb8r2\" (UID: 
\"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772187 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data-custom\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772254 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-combined-ca-bundle\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772307 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhh25\" (UniqueName: \"kubernetes.io/projected/57de2141-7360-4ac2-a13e-1e89f4bfc881-kube-api-access-jhh25\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772332 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q77p\" (UniqueName: \"kubernetes.io/projected/77758066-fa47-443d-bb66-9f05cae402bb-kube-api-access-8q77p\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772367 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-combined-ca-bundle\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data-custom\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.772516 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.777525 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data-custom\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.777941 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " 
pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.778377 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.780576 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-combined-ca-bundle\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.786023 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-combined-ca-bundle\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.787342 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data-custom\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.793877 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhh25\" (UniqueName: \"kubernetes.io/projected/57de2141-7360-4ac2-a13e-1e89f4bfc881-kube-api-access-jhh25\") pod \"heat-api-6db55f868d-kb8r2\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.797662 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q77p\" (UniqueName: \"kubernetes.io/projected/77758066-fa47-443d-bb66-9f05cae402bb-kube-api-access-8q77p\") pod \"heat-cfnapi-5fdc5c8998-xk56g\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.813438 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.829825 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:34 crc kubenswrapper[4982]: I0123 09:41:34.906232 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g"
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.482575 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6c94454b9f-m7jtc"]
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.502510 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6db55f868d-kb8r2"]
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.619381 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6db55f868d-kb8r2" event={"ID":"57de2141-7360-4ac2-a13e-1e89f4bfc881","Type":"ContainerStarted","Data":"ec011a56391fd94d978df27ad72090eac17deb3609417132cf7ba148f6d9577b"}
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.621309 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c94454b9f-m7jtc" event={"ID":"880081ea-a53e-4a63-9ea8-031a62596458","Type":"ContainerStarted","Data":"b84f075b0347d41fb63765344faab340ba6290b5202456d3b197ecce19b019e5"}
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.657784 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fdc5c8998-xk56g"]
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.847424 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4d055f-cbd3-4a1f-8b27-dd85681dffff" path="/var/lib/kubelet/pods/1e4d055f-cbd3-4a1f-8b27-dd85681dffff/volumes"
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.878345 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-697fbb5cdf-v5p2z"]
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.878523 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerName="heat-cfnapi" containerID="cri-o://82f6971e024872c0e78cd2a79c11855488d5ba73996f4cbc7ce2d3a0749c1fda" gracePeriod=60
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.908715 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-9fcdf944-5vdsm"]
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.909017 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-9fcdf944-5vdsm" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerName="heat-api" containerID="cri-o://6f5cc2f5bec220a949fb36522ebad4722d02368590df0247cbaab8d9be7d642b" gracePeriod=60
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.920241 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-9fcdf944-5vdsm" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.148:8004/healthcheck\": EOF"
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.937880 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.147:8000/healthcheck\": EOF"
Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.955589 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6b79f4dfb5-5wf2j"]
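[Editor's note] The two "Probe failed ... EOF" entries above are expected at this point: the old heat-api and heat-cfnapi pods have just been sent their termination signal ("Killing container with a grace period" ... gracePeriod=60), so their /healthcheck endpoints drop connections while the replacement pods come up. Functionally the kubelet's HTTP probe amounts to a GET with a deadline; a simplified stand-in (the real prober in prober.go also applies per-probe timeouts, thresholds, and headers from the pod spec):

```go
// Minimal stand-in for the HTTP readiness probe whose failures are logged
// above. One GET; any transport error (connection refused, EOF) or a status
// outside 200-399 counts as a probe failure, as in the kubelet's HTTP prober.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" or "EOF"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The heat-api endpoint from the log; only reachable inside the cluster.
	if err := probe("http://10.217.1.148:8004/healthcheck"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```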
Need to start a new one" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.963019 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.965579 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.979938 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78dc589b44-6kdjt"] Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.981785 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.984738 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.985112 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.995557 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-internal-tls-certs\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.995687 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-config-data\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.995780 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-config-data-custom\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.995862 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8st\" (UniqueName: \"kubernetes.io/projected/094d987b-cf37-440a-b435-4083f47919bc-kube-api-access-9c8st\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.995982 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-combined-ca-bundle\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:35 crc kubenswrapper[4982]: I0123 09:41:35.996078 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-public-tls-certs\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " 
pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.031489 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b79f4dfb5-5wf2j"] Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.041778 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78dc589b44-6kdjt"] Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098306 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-combined-ca-bundle\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098355 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-public-tls-certs\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098410 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-public-tls-certs\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098451 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwkg\" (UniqueName: \"kubernetes.io/projected/733f4324-f19a-4d66-b052-a58baa74f275-kube-api-access-xkwkg\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098475 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-config-data-custom\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098496 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-internal-tls-certs\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098597 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-internal-tls-certs\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098647 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-config-data\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " 
pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098716 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-config-data\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098742 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-combined-ca-bundle\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098784 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-config-data-custom\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.098824 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8st\" (UniqueName: \"kubernetes.io/projected/094d987b-cf37-440a-b435-4083f47919bc-kube-api-access-9c8st\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.106587 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-internal-tls-certs\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.106962 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-config-data\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.107227 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-public-tls-certs\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.107428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-config-data-custom\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.123332 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8st\" (UniqueName: \"kubernetes.io/projected/094d987b-cf37-440a-b435-4083f47919bc-kube-api-access-9c8st\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 
09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.125352 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/094d987b-cf37-440a-b435-4083f47919bc-combined-ca-bundle\") pod \"heat-cfnapi-6b79f4dfb5-5wf2j\" (UID: \"094d987b-cf37-440a-b435-4083f47919bc\") " pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.203884 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-config-data\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.203967 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-combined-ca-bundle\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.204040 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-public-tls-certs\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.204092 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkwkg\" (UniqueName: \"kubernetes.io/projected/733f4324-f19a-4d66-b052-a58baa74f275-kube-api-access-xkwkg\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.204110 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-config-data-custom\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.204125 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-internal-tls-certs\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.213457 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-combined-ca-bundle\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.213655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-public-tls-certs\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.214158 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-internal-tls-certs\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.215330 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-config-data\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.223545 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/733f4324-f19a-4d66-b052-a58baa74f275-config-data-custom\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.230245 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkwkg\" (UniqueName: \"kubernetes.io/projected/733f4324-f19a-4d66-b052-a58baa74f275-kube-api-access-xkwkg\") pod \"heat-api-78dc589b44-6kdjt\" (UID: \"733f4324-f19a-4d66-b052-a58baa74f275\") " pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.300215 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.308332 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.636086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6c94454b9f-m7jtc" event={"ID":"880081ea-a53e-4a63-9ea8-031a62596458","Type":"ContainerStarted","Data":"0738af036030b3298fff8d26518ab3bc9efd74f2a1f008c47082f08c8adfdeac"} Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.637253 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.643760 4982 generic.go:334] "Generic (PLEG): container finished" podID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerID="a521ea4563ebc82a81d3a6e4c9e378ad43b6ef4a07e5f5b3754aeecf6d666280" exitCode=1 Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.644479 4982 scope.go:117] "RemoveContainer" containerID="a521ea4563ebc82a81d3a6e4c9e378ad43b6ef4a07e5f5b3754aeecf6d666280" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.644799 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6db55f868d-kb8r2" event={"ID":"57de2141-7360-4ac2-a13e-1e89f4bfc881","Type":"ContainerDied","Data":"a521ea4563ebc82a81d3a6e4c9e378ad43b6ef4a07e5f5b3754aeecf6d666280"} Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.666072 4982 generic.go:334] "Generic (PLEG): container finished" podID="77758066-fa47-443d-bb66-9f05cae402bb" containerID="aa25f47e3e43958cb839abc372aadcf3e39fc462b25daef8d0afc7ff2e8691e6" exitCode=1 Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.666115 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" 
event={"ID":"77758066-fa47-443d-bb66-9f05cae402bb","Type":"ContainerDied","Data":"aa25f47e3e43958cb839abc372aadcf3e39fc462b25daef8d0afc7ff2e8691e6"} Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.666140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" event={"ID":"77758066-fa47-443d-bb66-9f05cae402bb","Type":"ContainerStarted","Data":"c05b7c373ae7a2df76123cbc3736eceed52bdc27db0940d5b126f2492a5cc18a"} Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.666960 4982 scope.go:117] "RemoveContainer" containerID="aa25f47e3e43958cb839abc372aadcf3e39fc462b25daef8d0afc7ff2e8691e6" Jan 23 09:41:36 crc kubenswrapper[4982]: I0123 09:41:36.689546 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6c94454b9f-m7jtc" podStartSLOduration=2.689530441 podStartE2EDuration="2.689530441s" podCreationTimestamp="2026-01-23 09:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:36.663632599 +0000 UTC m=+6371.143995985" watchObservedRunningTime="2026-01-23 09:41:36.689530441 +0000 UTC m=+6371.169893827" Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.001827 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6b79f4dfb5-5wf2j"] Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.211761 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5977db955d-zpcv9" Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.254361 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78dc589b44-6kdjt"] Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.694072 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" event={"ID":"77758066-fa47-443d-bb66-9f05cae402bb","Type":"ContainerStarted","Data":"60f737ef7908a58c66aee3da973f0783680ea9fa2fdd9749f153b5ebf472a4aa"} Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.694392 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.701029 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78dc589b44-6kdjt" event={"ID":"733f4324-f19a-4d66-b052-a58baa74f275","Type":"ContainerStarted","Data":"3676dbbd856173938a8895cce27c18db0fcb8df41f27dabca2274e1042575337"} Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.702470 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" event={"ID":"094d987b-cf37-440a-b435-4083f47919bc","Type":"ContainerStarted","Data":"6a3e5715e9beb8f4298db2ded5bccb9f9f4d8cae22f83953b265247ed5ce75d0"} Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.724320 4982 generic.go:334] "Generic (PLEG): container finished" podID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerID="09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc" exitCode=1 Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.725909 4982 scope.go:117] "RemoveContainer" containerID="09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc" Jan 23 09:41:37 crc kubenswrapper[4982]: E0123 09:41:37.726190 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.726224 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6db55f868d-kb8r2" event={"ID":"57de2141-7360-4ac2-a13e-1e89f4bfc881","Type":"ContainerDied","Data":"09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc"}
Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.726266 4982 scope.go:117] "RemoveContainer" containerID="a521ea4563ebc82a81d3a6e4c9e378ad43b6ef4a07e5f5b3754aeecf6d666280"
Jan 23 09:41:37 crc kubenswrapper[4982]: I0123 09:41:37.741201 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" podStartSLOduration=3.741177918 podStartE2EDuration="3.741177918s" podCreationTimestamp="2026-01-23 09:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:37.734127994 +0000 UTC m=+6372.214491400" watchObservedRunningTime="2026-01-23 09:41:37.741177918 +0000 UTC m=+6372.221541314"
Jan 23 09:41:38 crc kubenswrapper[4982]: I0123 09:41:38.737224 4982 scope.go:117] "RemoveContainer" containerID="09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc"
Jan 23 09:41:38 crc kubenswrapper[4982]: E0123 09:41:38.737760 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6db55f868d-kb8r2_openstack(57de2141-7360-4ac2-a13e-1e89f4bfc881)\"" pod="openstack/heat-api-6db55f868d-kb8r2" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.751667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" event={"ID":"094d987b-cf37-440a-b435-4083f47919bc","Type":"ContainerStarted","Data":"63cef888f91bba339ec4c052021af122ee3c3d373885dbd80c4d4e91140a7fba"}
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.752523 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.758924 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78dc589b44-6kdjt" event={"ID":"733f4324-f19a-4d66-b052-a58baa74f275","Type":"ContainerStarted","Data":"74471dd0b0e96220e5cebc37c50ea5a9ef0c66079ab542863a440ac077c0967c"}
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.759087 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78dc589b44-6kdjt"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.790927 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" podStartSLOduration=4.7909039 podStartE2EDuration="4.7909039s" podCreationTimestamp="2026-01-23 09:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:39.776015731 +0000 UTC m=+6374.256379137" watchObservedRunningTime="2026-01-23 09:41:39.7909039 +0000 UTC m=+6374.271267296"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.810040 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78dc589b44-6kdjt" podStartSLOduration=4.8100145560000005 podStartE2EDuration="4.810014556s" podCreationTimestamp="2026-01-23 09:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:41:39.797670646 +0000 UTC m=+6374.278034032" watchObservedRunningTime="2026-01-23 09:41:39.810014556 +0000 UTC m=+6374.290377952"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.843394 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6db55f868d-kb8r2"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.843722 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6db55f868d-kb8r2"
Jan 23 09:41:39 crc kubenswrapper[4982]: I0123 09:41:39.844826 4982 scope.go:117] "RemoveContainer" containerID="09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc"
Jan 23 09:41:39 crc kubenswrapper[4982]: E0123 09:41:39.845115 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6db55f868d-kb8r2_openstack(57de2141-7360-4ac2-a13e-1e89f4bfc881)\"" pod="openstack/heat-api-6db55f868d-kb8r2" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881"
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.176035 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5977db955d-zpcv9"
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.249449 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8647f885d6-jbzv7"]
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.249857 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8647f885d6-jbzv7" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon-log" containerID="cri-o://64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8" gracePeriod=30
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.250047 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8647f885d6-jbzv7" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" containerID="cri-o://6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf" gracePeriod=30
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.785330 4982 generic.go:334] "Generic (PLEG): container finished" podID="77758066-fa47-443d-bb66-9f05cae402bb" containerID="60f737ef7908a58c66aee3da973f0783680ea9fa2fdd9749f153b5ebf472a4aa" exitCode=1
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.785450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" event={"ID":"77758066-fa47-443d-bb66-9f05cae402bb","Type":"ContainerDied","Data":"60f737ef7908a58c66aee3da973f0783680ea9fa2fdd9749f153b5ebf472a4aa"}
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.785506 4982 scope.go:117] "RemoveContainer" containerID="aa25f47e3e43958cb839abc372aadcf3e39fc462b25daef8d0afc7ff2e8691e6"
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.788722 4982 scope.go:117] "RemoveContainer" containerID="60f737ef7908a58c66aee3da973f0783680ea9fa2fdd9749f153b5ebf472a4aa"
Jan 23 09:41:40 crc kubenswrapper[4982]: I0123 09:41:40.789070 4982 scope.go:117] "RemoveContainer" containerID="09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc"
Jan 23 09:41:40 crc kubenswrapper[4982]: E0123 09:41:40.789345 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6db55f868d-kb8r2_openstack(57de2141-7360-4ac2-a13e-1e89f4bfc881)\"" pod="openstack/heat-api-6db55f868d-kb8r2" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881"
Jan 23 09:41:40 crc kubenswrapper[4982]: E0123 09:41:40.794052 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5fdc5c8998-xk56g_openstack(77758066-fa47-443d-bb66-9f05cae402bb)\"" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" podUID="77758066-fa47-443d-bb66-9f05cae402bb"
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.327397 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-9fcdf944-5vdsm" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.1.148:8004/healthcheck\": read tcp 10.217.0.2:47582->10.217.1.148:8004: read: connection reset by peer"
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.346027 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.1.147:8000/healthcheck\": read tcp 10.217.0.2:37908->10.217.1.147:8000: read: connection reset by peer"
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.435573 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.435645 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.801739 4982 generic.go:334] "Generic (PLEG): container finished" podID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerID="82f6971e024872c0e78cd2a79c11855488d5ba73996f4cbc7ce2d3a0749c1fda" exitCode=0
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.801891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" event={"ID":"43aeb060-9827-42ba-9e0a-e8c3b7147dfa","Type":"ContainerDied","Data":"82f6971e024872c0e78cd2a79c11855488d5ba73996f4cbc7ce2d3a0749c1fda"}
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.808240 4982 generic.go:334] "Generic (PLEG): container finished" podID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerID="6f5cc2f5bec220a949fb36522ebad4722d02368590df0247cbaab8d9be7d642b" exitCode=0
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.808306 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9fcdf944-5vdsm" event={"ID":"e67bef95-3f1a-4ebd-920f-eb5062c3eb52","Type":"ContainerDied","Data":"6f5cc2f5bec220a949fb36522ebad4722d02368590df0247cbaab8d9be7d642b"}
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.935125 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-9fcdf944-5vdsm"
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.940915 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z"
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954069 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data-custom\") pod \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954151 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x2k7\" (UniqueName: \"kubernetes.io/projected/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-kube-api-access-8x2k7\") pod \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954194 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-combined-ca-bundle\") pod \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954212 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data-custom\") pod \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954291 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data\") pod \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954309 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-combined-ca-bundle\") pod \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954331 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data\") pod \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\" (UID: \"43aeb060-9827-42ba-9e0a-e8c3b7147dfa\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.954392 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjhwd\" (UniqueName: \"kubernetes.io/projected/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-kube-api-access-gjhwd\") pod \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\" (UID: \"e67bef95-3f1a-4ebd-920f-eb5062c3eb52\") "
Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.961101 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-kube-api-access-gjhwd" (OuterVolumeSpecName: "kube-api-access-gjhwd") pod "e67bef95-3f1a-4ebd-920f-eb5062c3eb52" (UID: "e67bef95-3f1a-4ebd-920f-eb5062c3eb52"). InnerVolumeSpecName "kube-api-access-gjhwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.964528 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43aeb060-9827-42ba-9e0a-e8c3b7147dfa" (UID: "43aeb060-9827-42ba-9e0a-e8c3b7147dfa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.967199 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-kube-api-access-8x2k7" (OuterVolumeSpecName: "kube-api-access-8x2k7") pod "43aeb060-9827-42ba-9e0a-e8c3b7147dfa" (UID: "43aeb060-9827-42ba-9e0a-e8c3b7147dfa"). InnerVolumeSpecName "kube-api-access-8x2k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:41 crc kubenswrapper[4982]: I0123 09:41:41.985081 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e67bef95-3f1a-4ebd-920f-eb5062c3eb52" (UID: "e67bef95-3f1a-4ebd-920f-eb5062c3eb52"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.024052 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43aeb060-9827-42ba-9e0a-e8c3b7147dfa" (UID: "43aeb060-9827-42ba-9e0a-e8c3b7147dfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.032451 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e67bef95-3f1a-4ebd-920f-eb5062c3eb52" (UID: "e67bef95-3f1a-4ebd-920f-eb5062c3eb52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.056559 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjhwd\" (UniqueName: \"kubernetes.io/projected/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-kube-api-access-gjhwd\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.057414 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.057514 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x2k7\" (UniqueName: \"kubernetes.io/projected/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-kube-api-access-8x2k7\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.057592 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.058638 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.058743 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.067800 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data" (OuterVolumeSpecName: "config-data") pod "e67bef95-3f1a-4ebd-920f-eb5062c3eb52" (UID: "e67bef95-3f1a-4ebd-920f-eb5062c3eb52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.089054 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data" (OuterVolumeSpecName: "config-data") pod "43aeb060-9827-42ba-9e0a-e8c3b7147dfa" (UID: "43aeb060-9827-42ba-9e0a-e8c3b7147dfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.160877 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e67bef95-3f1a-4ebd-920f-eb5062c3eb52-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.160920 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43aeb060-9827-42ba-9e0a-e8c3b7147dfa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.830661 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.830660 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697fbb5cdf-v5p2z" event={"ID":"43aeb060-9827-42ba-9e0a-e8c3b7147dfa","Type":"ContainerDied","Data":"7145000cc0bbf7a94d575cf3ab0dd66fce83f92da3d984657df2f0af41b52a62"} Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.831036 4982 scope.go:117] "RemoveContainer" containerID="82f6971e024872c0e78cd2a79c11855488d5ba73996f4cbc7ce2d3a0749c1fda" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.834067 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9fcdf944-5vdsm" event={"ID":"e67bef95-3f1a-4ebd-920f-eb5062c3eb52","Type":"ContainerDied","Data":"3fd8b7c7857748f04a85fda1d167b1fd8905f77471135f138aaf104a141ff7c5"} Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.834112 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-9fcdf944-5vdsm" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.878889 4982 scope.go:117] "RemoveContainer" containerID="6f5cc2f5bec220a949fb36522ebad4722d02368590df0247cbaab8d9be7d642b" Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.880402 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-697fbb5cdf-v5p2z"] Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.893735 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-697fbb5cdf-v5p2z"] Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.908204 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-9fcdf944-5vdsm"] Jan 23 09:41:42 crc kubenswrapper[4982]: I0123 09:41:42.919469 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-9fcdf944-5vdsm"] Jan 23 09:41:43 crc kubenswrapper[4982]: I0123 09:41:43.844357 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" path="/var/lib/kubelet/pods/43aeb060-9827-42ba-9e0a-e8c3b7147dfa/volumes" Jan 23 09:41:43 crc kubenswrapper[4982]: I0123 09:41:43.845222 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" path="/var/lib/kubelet/pods/e67bef95-3f1a-4ebd-920f-eb5062c3eb52/volumes" Jan 23 09:41:43 crc kubenswrapper[4982]: I0123 09:41:43.856256 4982 generic.go:334] "Generic (PLEG): container finished" podID="928c735a-055d-42b8-9e73-5a963fa6df88" containerID="6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf" exitCode=0 Jan 23 09:41:43 crc kubenswrapper[4982]: I0123 09:41:43.856313 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647f885d6-jbzv7" event={"ID":"928c735a-055d-42b8-9e73-5a963fa6df88","Type":"ContainerDied","Data":"6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf"} Jan 23 09:41:44 crc kubenswrapper[4982]: I0123 09:41:44.906774 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:44 crc kubenswrapper[4982]: I0123 09:41:44.907563 4982 scope.go:117] "RemoveContainer" containerID="60f737ef7908a58c66aee3da973f0783680ea9fa2fdd9749f153b5ebf472a4aa" Jan 23 09:41:44 crc kubenswrapper[4982]: E0123 09:41:44.908057 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=heat-cfnapi pod=heat-cfnapi-5fdc5c8998-xk56g_openstack(77758066-fa47-443d-bb66-9f05cae402bb)\"" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" podUID="77758066-fa47-443d-bb66-9f05cae402bb" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.433654 4982 scope.go:117] "RemoveContainer" containerID="e14355e90e6dd264bcc2375923b70e832db4cbd1de90d4598cefd60ca8e3ea24" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.564144 4982 scope.go:117] "RemoveContainer" containerID="4f0bd6446c2e09578acf7590e7a8ead2e77cbd78e6dd5bcde5ff2d51a43f95af" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.589133 4982 scope.go:117] "RemoveContainer" containerID="9a6261d8400f110a55ebd39650098ebe81199a32c40d4b69f0f2936678c54856" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.620851 4982 scope.go:117] "RemoveContainer" containerID="8bfd2cc213a7971067bf30d1738fdf7e03ae0ba5eee3d2f43c0f8550c9cae3c0" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.663405 4982 scope.go:117] "RemoveContainer" containerID="90c08413908ba442d9ec1a69270ee2c39da0354f4763b91763a46904084bb3f8" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.714578 4982 scope.go:117] "RemoveContainer" containerID="bc737ba8e930f8916c982a8d6d1504d9bb815ded2e4257843971c9a97795188c" Jan 23 09:41:45 crc kubenswrapper[4982]: I0123 09:41:45.739706 4982 scope.go:117] "RemoveContainer" containerID="07256719b9e57c70f2664360d43994f6b6e4aca95099e2b0afabd7d1a167f4f3" Jan 23 09:41:47 crc kubenswrapper[4982]: I0123 09:41:47.202873 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:41:47 crc kubenswrapper[4982]: I0123 09:41:47.743569 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78dc589b44-6kdjt" Jan 23 09:41:47 crc kubenswrapper[4982]: I0123 09:41:47.772909 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6b79f4dfb5-5wf2j" Jan 23 09:41:47 crc kubenswrapper[4982]: I0123 09:41:47.809851 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6db55f868d-kb8r2"] Jan 23 09:41:47 crc kubenswrapper[4982]: I0123 09:41:47.862486 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5fdc5c8998-xk56g"] Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.312703 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.320210 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.413814 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-combined-ca-bundle\") pod \"57de2141-7360-4ac2-a13e-1e89f4bfc881\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.413939 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhh25\" (UniqueName: \"kubernetes.io/projected/57de2141-7360-4ac2-a13e-1e89f4bfc881-kube-api-access-jhh25\") pod \"57de2141-7360-4ac2-a13e-1e89f4bfc881\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.413995 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data-custom\") pod \"57de2141-7360-4ac2-a13e-1e89f4bfc881\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.414033 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-combined-ca-bundle\") pod \"77758066-fa47-443d-bb66-9f05cae402bb\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.414338 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data-custom\") pod \"77758066-fa47-443d-bb66-9f05cae402bb\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.414361 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data\") pod \"77758066-fa47-443d-bb66-9f05cae402bb\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.414404 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q77p\" (UniqueName: \"kubernetes.io/projected/77758066-fa47-443d-bb66-9f05cae402bb-kube-api-access-8q77p\") pod \"77758066-fa47-443d-bb66-9f05cae402bb\" (UID: \"77758066-fa47-443d-bb66-9f05cae402bb\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.414443 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data\") pod \"57de2141-7360-4ac2-a13e-1e89f4bfc881\" (UID: \"57de2141-7360-4ac2-a13e-1e89f4bfc881\") " Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.422347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57de2141-7360-4ac2-a13e-1e89f4bfc881-kube-api-access-jhh25" (OuterVolumeSpecName: "kube-api-access-jhh25") pod "57de2141-7360-4ac2-a13e-1e89f4bfc881" (UID: "57de2141-7360-4ac2-a13e-1e89f4bfc881"). InnerVolumeSpecName "kube-api-access-jhh25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.424762 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57de2141-7360-4ac2-a13e-1e89f4bfc881" (UID: "57de2141-7360-4ac2-a13e-1e89f4bfc881"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.434296 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "77758066-fa47-443d-bb66-9f05cae402bb" (UID: "77758066-fa47-443d-bb66-9f05cae402bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.438554 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77758066-fa47-443d-bb66-9f05cae402bb-kube-api-access-8q77p" (OuterVolumeSpecName: "kube-api-access-8q77p") pod "77758066-fa47-443d-bb66-9f05cae402bb" (UID: "77758066-fa47-443d-bb66-9f05cae402bb"). InnerVolumeSpecName "kube-api-access-8q77p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.483671 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57de2141-7360-4ac2-a13e-1e89f4bfc881" (UID: "57de2141-7360-4ac2-a13e-1e89f4bfc881"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.492617 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77758066-fa47-443d-bb66-9f05cae402bb" (UID: "77758066-fa47-443d-bb66-9f05cae402bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.508581 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data" (OuterVolumeSpecName: "config-data") pod "77758066-fa47-443d-bb66-9f05cae402bb" (UID: "77758066-fa47-443d-bb66-9f05cae402bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517183 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhh25\" (UniqueName: \"kubernetes.io/projected/57de2141-7360-4ac2-a13e-1e89f4bfc881-kube-api-access-jhh25\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517218 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517229 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517238 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517246 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77758066-fa47-443d-bb66-9f05cae402bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517255 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q77p\" (UniqueName: \"kubernetes.io/projected/77758066-fa47-443d-bb66-9f05cae402bb-kube-api-access-8q77p\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.517263 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.523089 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data" (OuterVolumeSpecName: "config-data") pod "57de2141-7360-4ac2-a13e-1e89f4bfc881" (UID: "57de2141-7360-4ac2-a13e-1e89f4bfc881"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.619105 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57de2141-7360-4ac2-a13e-1e89f4bfc881-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.935277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6db55f868d-kb8r2" event={"ID":"57de2141-7360-4ac2-a13e-1e89f4bfc881","Type":"ContainerDied","Data":"ec011a56391fd94d978df27ad72090eac17deb3609417132cf7ba148f6d9577b"} Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.935327 4982 scope.go:117] "RemoveContainer" containerID="09011e203bcddaf5b6a47ef7515d266da4d75db4d01c8354d64b10b495bd0adc" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.935407 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6db55f868d-kb8r2" Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.964154 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" event={"ID":"77758066-fa47-443d-bb66-9f05cae402bb","Type":"ContainerDied","Data":"c05b7c373ae7a2df76123cbc3736eceed52bdc27db0940d5b126f2492a5cc18a"} Jan 23 09:41:48 crc kubenswrapper[4982]: I0123 09:41:48.964269 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fdc5c8998-xk56g" Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.013019 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6db55f868d-kb8r2"] Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.032500 4982 scope.go:117] "RemoveContainer" containerID="60f737ef7908a58c66aee3da973f0783680ea9fa2fdd9749f153b5ebf472a4aa" Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.044691 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6db55f868d-kb8r2"] Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.071898 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5fdc5c8998-xk56g"] Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.080503 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5fdc5c8998-xk56g"] Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.843085 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" path="/var/lib/kubelet/pods/57de2141-7360-4ac2-a13e-1e89f4bfc881/volumes" Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.843810 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77758066-fa47-443d-bb66-9f05cae402bb" path="/var/lib/kubelet/pods/77758066-fa47-443d-bb66-9f05cae402bb/volumes" Jan 23 09:41:49 crc kubenswrapper[4982]: I0123 09:41:49.917314 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8647f885d6-jbzv7" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.139:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8443: connect: connection refused" Jan 23 09:41:54 crc kubenswrapper[4982]: I0123 09:41:54.847873 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6c94454b9f-m7jtc" Jan 23 09:41:54 crc kubenswrapper[4982]: I0123 09:41:54.920072 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b54869999-pdzfb"] Jan 23 09:41:54 crc kubenswrapper[4982]: I0123 09:41:54.920349 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7b54869999-pdzfb" podUID="3bd2969b-823e-47c3-bbca-42d4f738df63" containerName="heat-engine" containerID="cri-o://951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" gracePeriod=60 Jan 23 09:41:57 crc kubenswrapper[4982]: E0123 09:41:57.162460 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 09:41:57 crc kubenswrapper[4982]: E0123 09:41:57.172078 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 09:41:57 crc kubenswrapper[4982]: E0123 09:41:57.196054 4982 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 09:41:57 crc kubenswrapper[4982]: E0123 09:41:57.196120 4982 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7b54869999-pdzfb" podUID="3bd2969b-823e-47c3-bbca-42d4f738df63" containerName="heat-engine" Jan 23 09:41:59 crc kubenswrapper[4982]: I0123 09:41:59.917341 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8647f885d6-jbzv7" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.139:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8443: connect: connection refused" Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.934831 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.952233 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-combined-ca-bundle\") pod \"3bd2969b-823e-47c3-bbca-42d4f738df63\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.952324 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data\") pod \"3bd2969b-823e-47c3-bbca-42d4f738df63\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.954111 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llsc6\" (UniqueName: \"kubernetes.io/projected/3bd2969b-823e-47c3-bbca-42d4f738df63-kube-api-access-llsc6\") pod \"3bd2969b-823e-47c3-bbca-42d4f738df63\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.954270 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data-custom\") pod \"3bd2969b-823e-47c3-bbca-42d4f738df63\" (UID: \"3bd2969b-823e-47c3-bbca-42d4f738df63\") " Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.964795 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd2969b-823e-47c3-bbca-42d4f738df63-kube-api-access-llsc6" (OuterVolumeSpecName: "kube-api-access-llsc6") pod "3bd2969b-823e-47c3-bbca-42d4f738df63" (UID: "3bd2969b-823e-47c3-bbca-42d4f738df63"). InnerVolumeSpecName "kube-api-access-llsc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:42:01 crc kubenswrapper[4982]: I0123 09:42:01.974869 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3bd2969b-823e-47c3-bbca-42d4f738df63" (UID: "3bd2969b-823e-47c3-bbca-42d4f738df63"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.011722 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bd2969b-823e-47c3-bbca-42d4f738df63" (UID: "3bd2969b-823e-47c3-bbca-42d4f738df63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.053861 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data" (OuterVolumeSpecName: "config-data") pod "3bd2969b-823e-47c3-bbca-42d4f738df63" (UID: "3bd2969b-823e-47c3-bbca-42d4f738df63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.058161 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llsc6\" (UniqueName: \"kubernetes.io/projected/3bd2969b-823e-47c3-bbca-42d4f738df63-kube-api-access-llsc6\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.058196 4982 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.058205 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.058216 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd2969b-823e-47c3-bbca-42d4f738df63-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.143885 4982 generic.go:334] "Generic (PLEG): container finished" podID="3bd2969b-823e-47c3-bbca-42d4f738df63" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" exitCode=0 Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.143931 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b54869999-pdzfb" event={"ID":"3bd2969b-823e-47c3-bbca-42d4f738df63","Type":"ContainerDied","Data":"951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c"} Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.143961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b54869999-pdzfb" event={"ID":"3bd2969b-823e-47c3-bbca-42d4f738df63","Type":"ContainerDied","Data":"33da7f7eeda61a299527c9bf9f18340a4cbb5549304496b83ee97f4549adfb81"} Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.143976 4982 scope.go:117] "RemoveContainer" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" Jan 23 09:42:02 crc 
kubenswrapper[4982]: I0123 09:42:02.144013 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b54869999-pdzfb" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.185659 4982 scope.go:117] "RemoveContainer" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" Jan 23 09:42:02 crc kubenswrapper[4982]: E0123 09:42:02.186457 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c\": container with ID starting with 951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c not found: ID does not exist" containerID="951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.186497 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c"} err="failed to get container status \"951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c\": rpc error: code = NotFound desc = could not find container \"951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c\": container with ID starting with 951ee18cac7bcf8d63f6a6fbd3239e4c919fc67a61a2085d0aa872f85a32d96c not found: ID does not exist" Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.190450 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7b54869999-pdzfb"] Jan 23 09:42:02 crc kubenswrapper[4982]: I0123 09:42:02.200761 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7b54869999-pdzfb"] Jan 23 09:42:03 crc kubenswrapper[4982]: I0123 09:42:03.841364 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd2969b-823e-47c3-bbca-42d4f738df63" path="/var/lib/kubelet/pods/3bd2969b-823e-47c3-bbca-42d4f738df63/volumes" Jan 23 09:42:09 crc kubenswrapper[4982]: I0123 09:42:09.917191 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8647f885d6-jbzv7" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.139:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.139:8443: connect: connection refused" Jan 23 09:42:09 crc kubenswrapper[4982]: I0123 09:42:09.917955 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.649902 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.756862 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-tls-certs\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.756940 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-scripts\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.756973 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-config-data\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.757005 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-secret-key\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.757040 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q57s\" (UniqueName: \"kubernetes.io/projected/928c735a-055d-42b8-9e73-5a963fa6df88-kube-api-access-7q57s\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.757171 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-combined-ca-bundle\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.757364 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/928c735a-055d-42b8-9e73-5a963fa6df88-logs\") pod \"928c735a-055d-42b8-9e73-5a963fa6df88\" (UID: \"928c735a-055d-42b8-9e73-5a963fa6df88\") " Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.758470 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928c735a-055d-42b8-9e73-5a963fa6df88-logs" (OuterVolumeSpecName: "logs") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.778313 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928c735a-055d-42b8-9e73-5a963fa6df88-kube-api-access-7q57s" (OuterVolumeSpecName: "kube-api-access-7q57s") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "kube-api-access-7q57s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.778426 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.786296 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-scripts" (OuterVolumeSpecName: "scripts") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.795356 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.795846 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-config-data" (OuterVolumeSpecName: "config-data") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.829327 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "928c735a-055d-42b8-9e73-5a963fa6df88" (UID: "928c735a-055d-42b8-9e73-5a963fa6df88"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861338 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861380 4982 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/928c735a-055d-42b8-9e73-5a963fa6df88-logs\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861393 4982 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861405 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861419 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/928c735a-055d-42b8-9e73-5a963fa6df88-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861430 4982 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/928c735a-055d-42b8-9e73-5a963fa6df88-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:10 crc kubenswrapper[4982]: I0123 09:42:10.861442 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q57s\" (UniqueName: \"kubernetes.io/projected/928c735a-055d-42b8-9e73-5a963fa6df88-kube-api-access-7q57s\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.227857 4982 generic.go:334] "Generic (PLEG): container finished" podID="928c735a-055d-42b8-9e73-5a963fa6df88" containerID="64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8" exitCode=137 Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.227900 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647f885d6-jbzv7" event={"ID":"928c735a-055d-42b8-9e73-5a963fa6df88","Type":"ContainerDied","Data":"64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8"} Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.227926 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8647f885d6-jbzv7" event={"ID":"928c735a-055d-42b8-9e73-5a963fa6df88","Type":"ContainerDied","Data":"5a7664a1bf6f432d6cffe5562174881dd7575e1e218c0c85246209277c068cba"} Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.227942 4982 scope.go:117] "RemoveContainer" containerID="6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.228066 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8647f885d6-jbzv7" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.269837 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8647f885d6-jbzv7"] Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.279540 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8647f885d6-jbzv7"] Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.391329 4982 scope.go:117] "RemoveContainer" containerID="64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.415639 4982 scope.go:117] "RemoveContainer" containerID="6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf" Jan 23 09:42:11 crc kubenswrapper[4982]: E0123 09:42:11.416095 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf\": container with ID starting with 6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf not found: ID does not exist" containerID="6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.416134 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf"} err="failed to get container status \"6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf\": rpc error: code = NotFound desc = could not find container \"6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf\": container with ID starting with 6c46e6883e0abf9751690de3a564a4bb7151443db0d64e589bddb90e62732faf not found: ID does not exist" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.416161 4982 scope.go:117] "RemoveContainer" containerID="64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8" Jan 23 09:42:11 crc kubenswrapper[4982]: E0123 09:42:11.416526 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8\": container with ID starting with 64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8 not found: ID does not exist" containerID="64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.416554 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8"} err="failed to get container status \"64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8\": rpc error: code = NotFound desc = could not find container \"64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8\": container with ID starting with 64b13b73febf2b828b213881b382775ab11c46ee9c1b3897c1a3978623ac8aa8 not found: ID does not exist" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.435807 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.435870 4982 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:42:11 crc kubenswrapper[4982]: I0123 09:42:11.839957 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" path="/var/lib/kubelet/pods/928c735a-055d-42b8-9e73-5a963fa6df88/volumes" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.492045 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw"] Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493193 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77758066-fa47-443d-bb66-9f05cae402bb" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493214 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="77758066-fa47-443d-bb66-9f05cae402bb" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493247 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd2969b-823e-47c3-bbca-42d4f738df63" containerName="heat-engine" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493255 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd2969b-823e-47c3-bbca-42d4f738df63" containerName="heat-engine" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493267 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77758066-fa47-443d-bb66-9f05cae402bb" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493275 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="77758066-fa47-443d-bb66-9f05cae402bb" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493308 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon-log" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493315 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon-log" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493329 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493335 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493352 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493358 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493371 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493377 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493388 4982 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493394 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: E0123 09:42:15.493405 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493411 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493657 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493675 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493684 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="928c735a-055d-42b8-9e73-5a963fa6df88" containerName="horizon-log" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493694 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="77758066-fa47-443d-bb66-9f05cae402bb" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493701 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd2969b-823e-47c3-bbca-42d4f738df63" containerName="heat-engine" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493711 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67bef95-3f1a-4ebd-920f-eb5062c3eb52" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.493723 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="43aeb060-9827-42ba-9e0a-e8c3b7147dfa" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.494171 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="57de2141-7360-4ac2-a13e-1e89f4bfc881" containerName="heat-api" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.494189 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="77758066-fa47-443d-bb66-9f05cae402bb" containerName="heat-cfnapi" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.495423 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.499078 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.501910 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw"] Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.561175 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsxj\" (UniqueName: \"kubernetes.io/projected/20080122-ebd1-4840-979b-77c6aa4d4896-kube-api-access-dmsxj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.561264 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.561297 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.662759 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsxj\" (UniqueName: \"kubernetes.io/projected/20080122-ebd1-4840-979b-77c6aa4d4896-kube-api-access-dmsxj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.662874 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.662905 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.663408 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.663444 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.688938 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsxj\" (UniqueName: \"kubernetes.io/projected/20080122-ebd1-4840-979b-77c6aa4d4896-kube-api-access-dmsxj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:15 crc kubenswrapper[4982]: I0123 09:42:15.829510 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:16 crc kubenswrapper[4982]: I0123 09:42:16.316195 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw"] Jan 23 09:42:17 crc kubenswrapper[4982]: I0123 09:42:17.294975 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" event={"ID":"20080122-ebd1-4840-979b-77c6aa4d4896","Type":"ContainerStarted","Data":"6f08d98e797b499d324b7cca40d048c68b42f09490bfb4cb2830bc9b4a3be09c"} Jan 23 09:42:18 crc kubenswrapper[4982]: I0123 09:42:18.305637 4982 generic.go:334] "Generic (PLEG): container finished" podID="20080122-ebd1-4840-979b-77c6aa4d4896" containerID="fd1a9908f68ec903e180a2b0abfeef2643167960fb94898a2806de3e65b4504e" exitCode=0 Jan 23 09:42:18 crc kubenswrapper[4982]: I0123 09:42:18.305680 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" event={"ID":"20080122-ebd1-4840-979b-77c6aa4d4896","Type":"ContainerDied","Data":"fd1a9908f68ec903e180a2b0abfeef2643167960fb94898a2806de3e65b4504e"} Jan 23 09:42:20 crc kubenswrapper[4982]: I0123 09:42:20.331896 4982 generic.go:334] "Generic (PLEG): container finished" podID="20080122-ebd1-4840-979b-77c6aa4d4896" containerID="dd85cb9fc4a7952a53be9b1a475df8ee1f58778859e335afde4e3cee5cfe2fcf" exitCode=0 Jan 23 09:42:20 crc kubenswrapper[4982]: I0123 09:42:20.332545 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" event={"ID":"20080122-ebd1-4840-979b-77c6aa4d4896","Type":"ContainerDied","Data":"dd85cb9fc4a7952a53be9b1a475df8ee1f58778859e335afde4e3cee5cfe2fcf"} Jan 23 09:42:21 crc kubenswrapper[4982]: I0123 09:42:21.345055 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" 
event={"ID":"20080122-ebd1-4840-979b-77c6aa4d4896","Type":"ContainerStarted","Data":"ba3bdda7bf40f768b5656f5116671fd0ba8d772d961f639ef81cc85a219b5fce"} Jan 23 09:42:21 crc kubenswrapper[4982]: I0123 09:42:21.367042 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" podStartSLOduration=5.265677836 podStartE2EDuration="6.367026509s" podCreationTimestamp="2026-01-23 09:42:15 +0000 UTC" firstStartedPulling="2026-01-23 09:42:18.307403338 +0000 UTC m=+6412.787766714" lastFinishedPulling="2026-01-23 09:42:19.408752001 +0000 UTC m=+6413.889115387" observedRunningTime="2026-01-23 09:42:21.364084768 +0000 UTC m=+6415.844448154" watchObservedRunningTime="2026-01-23 09:42:21.367026509 +0000 UTC m=+6415.847389895" Jan 23 09:42:22 crc kubenswrapper[4982]: I0123 09:42:22.361790 4982 generic.go:334] "Generic (PLEG): container finished" podID="20080122-ebd1-4840-979b-77c6aa4d4896" containerID="ba3bdda7bf40f768b5656f5116671fd0ba8d772d961f639ef81cc85a219b5fce" exitCode=0 Jan 23 09:42:22 crc kubenswrapper[4982]: I0123 09:42:22.361839 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" event={"ID":"20080122-ebd1-4840-979b-77c6aa4d4896","Type":"ContainerDied","Data":"ba3bdda7bf40f768b5656f5116671fd0ba8d772d961f639ef81cc85a219b5fce"} Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.709795 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.835658 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmsxj\" (UniqueName: \"kubernetes.io/projected/20080122-ebd1-4840-979b-77c6aa4d4896-kube-api-access-dmsxj\") pod \"20080122-ebd1-4840-979b-77c6aa4d4896\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.836126 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-bundle\") pod \"20080122-ebd1-4840-979b-77c6aa4d4896\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.836310 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-util\") pod \"20080122-ebd1-4840-979b-77c6aa4d4896\" (UID: \"20080122-ebd1-4840-979b-77c6aa4d4896\") " Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.839949 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-bundle" (OuterVolumeSpecName: "bundle") pod "20080122-ebd1-4840-979b-77c6aa4d4896" (UID: "20080122-ebd1-4840-979b-77c6aa4d4896"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.842446 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20080122-ebd1-4840-979b-77c6aa4d4896-kube-api-access-dmsxj" (OuterVolumeSpecName: "kube-api-access-dmsxj") pod "20080122-ebd1-4840-979b-77c6aa4d4896" (UID: "20080122-ebd1-4840-979b-77c6aa4d4896"). InnerVolumeSpecName "kube-api-access-dmsxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.849748 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-util" (OuterVolumeSpecName: "util") pod "20080122-ebd1-4840-979b-77c6aa4d4896" (UID: "20080122-ebd1-4840-979b-77c6aa4d4896"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.939402 4982 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.939451 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmsxj\" (UniqueName: \"kubernetes.io/projected/20080122-ebd1-4840-979b-77c6aa4d4896-kube-api-access-dmsxj\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:23 crc kubenswrapper[4982]: I0123 09:42:23.939468 4982 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/20080122-ebd1-4840-979b-77c6aa4d4896-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:42:24 crc kubenswrapper[4982]: I0123 09:42:24.389969 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" event={"ID":"20080122-ebd1-4840-979b-77c6aa4d4896","Type":"ContainerDied","Data":"6f08d98e797b499d324b7cca40d048c68b42f09490bfb4cb2830bc9b4a3be09c"} Jan 23 09:42:24 crc kubenswrapper[4982]: I0123 09:42:24.390020 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f08d98e797b499d324b7cca40d048c68b42f09490bfb4cb2830bc9b4a3be09c" Jan 23 09:42:24 crc kubenswrapper[4982]: I0123 09:42:24.390108 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.093673 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4"] Jan 23 09:42:35 crc kubenswrapper[4982]: E0123 09:42:35.095647 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="util" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.095746 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="util" Jan 23 09:42:35 crc kubenswrapper[4982]: E0123 09:42:35.095844 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="extract" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.095913 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="extract" Jan 23 09:42:35 crc kubenswrapper[4982]: E0123 09:42:35.096013 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="pull" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.096089 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="pull" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.096421 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="20080122-ebd1-4840-979b-77c6aa4d4896" containerName="extract" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.097450 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.101158 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zb5lq" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.101855 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.102111 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.107152 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.155356 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.157892 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.165870 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.168557 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5z72d" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.169534 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.179990 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.181527 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.221695 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.226481 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2vmm\" (UniqueName: \"kubernetes.io/projected/f469aaaa-fc2d-4b94-8686-754943207b00-kube-api-access-g2vmm\") pod \"obo-prometheus-operator-68bc856cb9-q4hk4\" (UID: \"f469aaaa-fc2d-4b94-8686-754943207b00\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.330616 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dqwbd"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.332524 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.333859 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b55ae8cf-edc4-4236-999e-e9db7006318f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7\" (UID: \"b55ae8cf-edc4-4236-999e-e9db7006318f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.334063 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b55ae8cf-edc4-4236-999e-e9db7006318f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7\" (UID: \"b55ae8cf-edc4-4236-999e-e9db7006318f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.334269 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2vmm\" (UniqueName: \"kubernetes.io/projected/f469aaaa-fc2d-4b94-8686-754943207b00-kube-api-access-g2vmm\") pod \"obo-prometheus-operator-68bc856cb9-q4hk4\" (UID: \"f469aaaa-fc2d-4b94-8686-754943207b00\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.334423 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e540fff1-acc7-4d53-8691-95f7e6ac66b2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-xcv49\" (UID: \"e540fff1-acc7-4d53-8691-95f7e6ac66b2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.334539 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e540fff1-acc7-4d53-8691-95f7e6ac66b2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-xcv49\" (UID: \"e540fff1-acc7-4d53-8691-95f7e6ac66b2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.335698 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-wrg6h" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.340065 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.347521 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dqwbd"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.373296 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2vmm\" (UniqueName: \"kubernetes.io/projected/f469aaaa-fc2d-4b94-8686-754943207b00-kube-api-access-g2vmm\") pod \"obo-prometheus-operator-68bc856cb9-q4hk4\" (UID: \"f469aaaa-fc2d-4b94-8686-754943207b00\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.436614 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf92fd73-e352-4046-8990-ada508e159a0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dqwbd\" (UID: \"bf92fd73-e352-4046-8990-ada508e159a0\") " pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.436813 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e540fff1-acc7-4d53-8691-95f7e6ac66b2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-xcv49\" (UID: \"e540fff1-acc7-4d53-8691-95f7e6ac66b2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.436866 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e540fff1-acc7-4d53-8691-95f7e6ac66b2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-xcv49\" (UID: \"e540fff1-acc7-4d53-8691-95f7e6ac66b2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.436979 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b55ae8cf-edc4-4236-999e-e9db7006318f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7\" (UID: \"b55ae8cf-edc4-4236-999e-e9db7006318f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.437033 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6hjd\" (UniqueName: \"kubernetes.io/projected/bf92fd73-e352-4046-8990-ada508e159a0-kube-api-access-q6hjd\") pod \"observability-operator-59bdc8b94-dqwbd\" (UID: \"bf92fd73-e352-4046-8990-ada508e159a0\") " pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.437120 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b55ae8cf-edc4-4236-999e-e9db7006318f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7\" (UID: \"b55ae8cf-edc4-4236-999e-e9db7006318f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.443326 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b55ae8cf-edc4-4236-999e-e9db7006318f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7\" (UID: \"b55ae8cf-edc4-4236-999e-e9db7006318f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.448343 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e540fff1-acc7-4d53-8691-95f7e6ac66b2-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-xcv49\" (UID: \"e540fff1-acc7-4d53-8691-95f7e6ac66b2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.452241 4982 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e540fff1-acc7-4d53-8691-95f7e6ac66b2-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-xcv49\" (UID: \"e540fff1-acc7-4d53-8691-95f7e6ac66b2\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.471830 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b55ae8cf-edc4-4236-999e-e9db7006318f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7\" (UID: \"b55ae8cf-edc4-4236-999e-e9db7006318f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.476980 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.497191 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.512118 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.538947 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6hjd\" (UniqueName: \"kubernetes.io/projected/bf92fd73-e352-4046-8990-ada508e159a0-kube-api-access-q6hjd\") pod \"observability-operator-59bdc8b94-dqwbd\" (UID: \"bf92fd73-e352-4046-8990-ada508e159a0\") " pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.539078 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf92fd73-e352-4046-8990-ada508e159a0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dqwbd\" (UID: \"bf92fd73-e352-4046-8990-ada508e159a0\") " pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.554542 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf92fd73-e352-4046-8990-ada508e159a0-observability-operator-tls\") pod \"observability-operator-59bdc8b94-dqwbd\" (UID: \"bf92fd73-e352-4046-8990-ada508e159a0\") " pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.576490 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6hjd\" (UniqueName: \"kubernetes.io/projected/bf92fd73-e352-4046-8990-ada508e159a0-kube-api-access-q6hjd\") pod \"observability-operator-59bdc8b94-dqwbd\" (UID: \"bf92fd73-e352-4046-8990-ada508e159a0\") " pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.598681 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8snlk"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.600195 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.607565 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-hh47q" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.625178 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8snlk"] Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.657598 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.748690 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nbxl\" (UniqueName: \"kubernetes.io/projected/5abb74fc-c030-4b49-a655-8ef50e5a4a6a-kube-api-access-9nbxl\") pod \"perses-operator-5bf474d74f-8snlk\" (UID: \"5abb74fc-c030-4b49-a655-8ef50e5a4a6a\") " pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.748821 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abb74fc-c030-4b49-a655-8ef50e5a4a6a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8snlk\" (UID: \"5abb74fc-c030-4b49-a655-8ef50e5a4a6a\") " pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.890853 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nbxl\" (UniqueName: \"kubernetes.io/projected/5abb74fc-c030-4b49-a655-8ef50e5a4a6a-kube-api-access-9nbxl\") pod \"perses-operator-5bf474d74f-8snlk\" (UID: \"5abb74fc-c030-4b49-a655-8ef50e5a4a6a\") " pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.891276 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abb74fc-c030-4b49-a655-8ef50e5a4a6a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8snlk\" (UID: \"5abb74fc-c030-4b49-a655-8ef50e5a4a6a\") " pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.892571 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abb74fc-c030-4b49-a655-8ef50e5a4a6a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-8snlk\" (UID: \"5abb74fc-c030-4b49-a655-8ef50e5a4a6a\") " pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:35 crc kubenswrapper[4982]: I0123 09:42:35.925403 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nbxl\" (UniqueName: \"kubernetes.io/projected/5abb74fc-c030-4b49-a655-8ef50e5a4a6a-kube-api-access-9nbxl\") pod \"perses-operator-5bf474d74f-8snlk\" (UID: \"5abb74fc-c030-4b49-a655-8ef50e5a4a6a\") " pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.006505 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.482930 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7"] Jan 23 09:42:36 crc kubenswrapper[4982]: W0123 09:42:36.604960 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode540fff1_acc7_4d53_8691_95f7e6ac66b2.slice/crio-a1e267b3e05aab5388d72ddf9b9bd00b8585bf474488755f987f039e38a334df WatchSource:0}: Error finding container a1e267b3e05aab5388d72ddf9b9bd00b8585bf474488755f987f039e38a334df: Status 404 returned error can't find the container with id a1e267b3e05aab5388d72ddf9b9bd00b8585bf474488755f987f039e38a334df Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.623806 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49"] Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.662805 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" event={"ID":"b55ae8cf-edc4-4236-999e-e9db7006318f","Type":"ContainerStarted","Data":"109d39bc0b347086f0ff82c2556f0ade8a29d19670eff078ce08ca00d4744434"} Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.665140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" event={"ID":"e540fff1-acc7-4d53-8691-95f7e6ac66b2","Type":"ContainerStarted","Data":"a1e267b3e05aab5388d72ddf9b9bd00b8585bf474488755f987f039e38a334df"} Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.671561 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4"] Jan 23 09:42:36 crc kubenswrapper[4982]: W0123 09:42:36.703996 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf92fd73_e352_4046_8990_ada508e159a0.slice/crio-1e472966adbac8a5ecbccccbfdc225982def0c6e8c7ce2918f984e91429bb2a6 WatchSource:0}: Error finding container 1e472966adbac8a5ecbccccbfdc225982def0c6e8c7ce2918f984e91429bb2a6: Status 404 returned error can't find the container with id 1e472966adbac8a5ecbccccbfdc225982def0c6e8c7ce2918f984e91429bb2a6 Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.707612 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-dqwbd"] Jan 23 09:42:36 crc kubenswrapper[4982]: I0123 09:42:36.837082 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-8snlk"] Jan 23 09:42:36 crc kubenswrapper[4982]: W0123 09:42:36.839469 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5abb74fc_c030_4b49_a655_8ef50e5a4a6a.slice/crio-e42379bfdd5e1b2f78f6c03287417d3e2eceb6c2cb7aa86ad459c58e12c43fb0 WatchSource:0}: Error finding container e42379bfdd5e1b2f78f6c03287417d3e2eceb6c2cb7aa86ad459c58e12c43fb0: Status 404 returned error can't find the container with id e42379bfdd5e1b2f78f6c03287417d3e2eceb6c2cb7aa86ad459c58e12c43fb0 Jan 23 09:42:37 crc kubenswrapper[4982]: I0123 09:42:37.029996 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vh2xw"] Jan 23 09:42:37 crc 
kubenswrapper[4982]: I0123 09:42:37.041984 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vh2xw"] Jan 23 09:42:37 crc kubenswrapper[4982]: I0123 09:42:37.678937 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" event={"ID":"5abb74fc-c030-4b49-a655-8ef50e5a4a6a","Type":"ContainerStarted","Data":"e42379bfdd5e1b2f78f6c03287417d3e2eceb6c2cb7aa86ad459c58e12c43fb0"} Jan 23 09:42:37 crc kubenswrapper[4982]: I0123 09:42:37.681713 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" event={"ID":"bf92fd73-e352-4046-8990-ada508e159a0","Type":"ContainerStarted","Data":"1e472966adbac8a5ecbccccbfdc225982def0c6e8c7ce2918f984e91429bb2a6"} Jan 23 09:42:37 crc kubenswrapper[4982]: I0123 09:42:37.684607 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" event={"ID":"f469aaaa-fc2d-4b94-8686-754943207b00","Type":"ContainerStarted","Data":"7c8ee6a3d9273ac157e2da10b9cb4e2f59f8dbb567a252b39e65cde1d293d156"} Jan 23 09:42:37 crc kubenswrapper[4982]: I0123 09:42:37.845966 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6319f407-c75a-4ff2-bbc2-5bbd285e941a" path="/var/lib/kubelet/pods/6319f407-c75a-4ff2-bbc2-5bbd285e941a/volumes" Jan 23 09:42:38 crc kubenswrapper[4982]: I0123 09:42:38.048057 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-707c-account-create-update-xm4pw"] Jan 23 09:42:38 crc kubenswrapper[4982]: I0123 09:42:38.075907 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-707c-account-create-update-xm4pw"] Jan 23 09:42:39 crc kubenswrapper[4982]: I0123 09:42:39.847094 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a94468e3-2ef9-4eaf-bd81-cca4b468ae38" path="/var/lib/kubelet/pods/a94468e3-2ef9-4eaf-bd81-cca4b468ae38/volumes" Jan 23 09:42:40 crc kubenswrapper[4982]: I0123 09:42:40.749335 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" event={"ID":"b55ae8cf-edc4-4236-999e-e9db7006318f","Type":"ContainerStarted","Data":"00efc8220e5b42cda8179e8b49a7cf80951d502b3e87a15b21bc0a37d5b783ae"} Jan 23 09:42:40 crc kubenswrapper[4982]: I0123 09:42:40.774121 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7" podStartSLOduration=3.725977136 podStartE2EDuration="5.774102544s" podCreationTimestamp="2026-01-23 09:42:35 +0000 UTC" firstStartedPulling="2026-01-23 09:42:36.509957972 +0000 UTC m=+6430.990321358" lastFinishedPulling="2026-01-23 09:42:38.55808338 +0000 UTC m=+6433.038446766" observedRunningTime="2026-01-23 09:42:40.764743406 +0000 UTC m=+6435.245106812" watchObservedRunningTime="2026-01-23 09:42:40.774102544 +0000 UTC m=+6435.254465930" Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.435673 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.435738 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.435787 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.436665 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.436714 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" gracePeriod=600 Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.765640 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" exitCode=0 Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.766531 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29"} Jan 23 09:42:41 crc kubenswrapper[4982]: I0123 09:42:41.766568 4982 scope.go:117] "RemoveContainer" containerID="ed33b1a30507fbf99e82a0af06fe01d0b4796979bfbb452c4c776529e48b651d" Jan 23 09:42:44 crc kubenswrapper[4982]: E0123 09:42:44.788668 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:42:44 crc kubenswrapper[4982]: I0123 09:42:44.806970 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:42:44 crc kubenswrapper[4982]: E0123 09:42:44.807303 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.819846 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" event={"ID":"5abb74fc-c030-4b49-a655-8ef50e5a4a6a","Type":"ContainerStarted","Data":"a3a0166b284f50190377b4bd322ca307be251269f00650cc7f39077525b85af8"} Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 
09:42:45.820492 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.823398 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" event={"ID":"bf92fd73-e352-4046-8990-ada508e159a0","Type":"ContainerStarted","Data":"878fd07f42f83ecb50e7852e5a9ed8c4a8dbc46cc7c95044e497b9552f0342d7"} Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.823615 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.843891 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" podStartSLOduration=2.782904351 podStartE2EDuration="10.843869979s" podCreationTimestamp="2026-01-23 09:42:35 +0000 UTC" firstStartedPulling="2026-01-23 09:42:36.843346656 +0000 UTC m=+6431.323710042" lastFinishedPulling="2026-01-23 09:42:44.904312284 +0000 UTC m=+6439.384675670" observedRunningTime="2026-01-23 09:42:45.840773864 +0000 UTC m=+6440.321137280" watchObservedRunningTime="2026-01-23 09:42:45.843869979 +0000 UTC m=+6440.324233375" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.858892 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" event={"ID":"e540fff1-acc7-4d53-8691-95f7e6ac66b2","Type":"ContainerStarted","Data":"0a1738d3c3bf005b0625b93b0fb94aeec29aaa04a4a3c317c60633df4e0fca42"} Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.858988 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.887923 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f79d544bd-xcv49" podStartSLOduration=7.775908159 podStartE2EDuration="10.88790443s" podCreationTimestamp="2026-01-23 09:42:35 +0000 UTC" firstStartedPulling="2026-01-23 09:42:36.622026243 +0000 UTC m=+6431.102389629" lastFinishedPulling="2026-01-23 09:42:39.734022514 +0000 UTC m=+6434.214385900" observedRunningTime="2026-01-23 09:42:45.882198673 +0000 UTC m=+6440.362562059" watchObservedRunningTime="2026-01-23 09:42:45.88790443 +0000 UTC m=+6440.368267826" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.943084 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-dqwbd" podStartSLOduration=2.745758211 podStartE2EDuration="10.943066826s" podCreationTimestamp="2026-01-23 09:42:35 +0000 UTC" firstStartedPulling="2026-01-23 09:42:36.707751249 +0000 UTC m=+6431.188114635" lastFinishedPulling="2026-01-23 09:42:44.905059864 +0000 UTC m=+6439.385423250" observedRunningTime="2026-01-23 09:42:45.929153693 +0000 UTC m=+6440.409517099" watchObservedRunningTime="2026-01-23 09:42:45.943066826 +0000 UTC m=+6440.423430212" Jan 23 09:42:45 crc kubenswrapper[4982]: I0123 09:42:45.953696 4982 scope.go:117] "RemoveContainer" containerID="6e584a8bb1a63e2a0fd48284c29842f5e3953c3be66c965359ecc655c32f9d3a" Jan 23 09:42:46 crc kubenswrapper[4982]: I0123 09:42:46.085785 4982 scope.go:117] "RemoveContainer" containerID="8e9b8a9c19ea324df867283f49f79f7dd97c3b8dc741f123409b2afef0bd82b8" Jan 23 
09:42:47 crc kubenswrapper[4982]: I0123 09:42:47.861378 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" event={"ID":"f469aaaa-fc2d-4b94-8686-754943207b00","Type":"ContainerStarted","Data":"31d18f64ffe3c6f332c57e82d51b482ff405461bb8610ccde30011fa18cd5a64"} Jan 23 09:42:47 crc kubenswrapper[4982]: I0123 09:42:47.884664 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-q4hk4" podStartSLOduration=4.66691215 podStartE2EDuration="12.884646956s" podCreationTimestamp="2026-01-23 09:42:35 +0000 UTC" firstStartedPulling="2026-01-23 09:42:36.685801236 +0000 UTC m=+6431.166164622" lastFinishedPulling="2026-01-23 09:42:44.903536042 +0000 UTC m=+6439.383899428" observedRunningTime="2026-01-23 09:42:47.879011161 +0000 UTC m=+6442.359374547" watchObservedRunningTime="2026-01-23 09:42:47.884646956 +0000 UTC m=+6442.365010342" Jan 23 09:42:48 crc kubenswrapper[4982]: I0123 09:42:48.032437 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cb8gf"] Jan 23 09:42:48 crc kubenswrapper[4982]: I0123 09:42:48.047390 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cb8gf"] Jan 23 09:42:49 crc kubenswrapper[4982]: I0123 09:42:49.842451 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402f0d81-2b8f-443d-bc52-d9e333a31306" path="/var/lib/kubelet/pods/402f0d81-2b8f-443d-bc52-d9e333a31306/volumes" Jan 23 09:42:55 crc kubenswrapper[4982]: I0123 09:42:55.827550 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:42:55 crc kubenswrapper[4982]: E0123 09:42:55.828256 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:42:56 crc kubenswrapper[4982]: I0123 09:42:56.012928 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-8snlk" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.123690 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.124604 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" containerName="openstackclient" containerID="cri-o://ed38a1ca8c1e5669c3259b885b55f496e0dbf5930c821ad04c149000882a974f" gracePeriod=2 Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.158422 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.336687 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 09:42:58 crc kubenswrapper[4982]: E0123 09:42:58.337287 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" containerName="openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.337307 4982 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" containerName="openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.337580 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" containerName="openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.338587 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.343122 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" podUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.385885 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.488124 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvvx\" (UniqueName: \"kubernetes.io/projected/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-kube-api-access-xxvvx\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.488213 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config-secret\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.488246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.488323 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.592112 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxvvx\" (UniqueName: \"kubernetes.io/projected/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-kube-api-access-xxvvx\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.592178 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config-secret\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.592210 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " 
pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.592264 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.593681 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.792492 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxvvx\" (UniqueName: \"kubernetes.io/projected/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-kube-api-access-xxvvx\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.793336 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.794891 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config-secret\") pod \"openstackclient\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " pod="openstack/openstackclient" Jan 23 09:42:58 crc kubenswrapper[4982]: I0123 09:42:58.972816 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:42:59 crc kubenswrapper[4982]: I0123 09:42:59.999686 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.003105 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.010248 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gxwwz" Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.029411 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.134379 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgs7t\" (UniqueName: \"kubernetes.io/projected/18a97dbf-5734-4e1a-a8d6-685b06329521-kube-api-access-mgs7t\") pod \"kube-state-metrics-0\" (UID: \"18a97dbf-5734-4e1a-a8d6-685b06329521\") " pod="openstack/kube-state-metrics-0" Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.236489 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgs7t\" (UniqueName: \"kubernetes.io/projected/18a97dbf-5734-4e1a-a8d6-685b06329521-kube-api-access-mgs7t\") pod \"kube-state-metrics-0\" (UID: \"18a97dbf-5734-4e1a-a8d6-685b06329521\") " pod="openstack/kube-state-metrics-0" Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.258369 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgs7t\" (UniqueName: \"kubernetes.io/projected/18a97dbf-5734-4e1a-a8d6-685b06329521-kube-api-access-mgs7t\") pod \"kube-state-metrics-0\" (UID: \"18a97dbf-5734-4e1a-a8d6-685b06329521\") " pod="openstack/kube-state-metrics-0" Jan 23 09:43:00 crc kubenswrapper[4982]: I0123 09:43:00.401889 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:43:01 crc kubenswrapper[4982]: I0123 09:43:01.058407 4982 generic.go:334] "Generic (PLEG): container finished" podID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" containerID="ed38a1ca8c1e5669c3259b885b55f496e0dbf5930c821ad04c149000882a974f" exitCode=137 Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.976774 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.980945 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.995683 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.996029 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.996162 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.996285 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Jan 23 09:43:02 crc kubenswrapper[4982]: I0123 09:43:02.996853 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-97hz5" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.004516 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.051762 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.089077 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"18a97dbf-5734-4e1a-a8d6-685b06329521","Type":"ContainerStarted","Data":"0506b5bd940ab33e43a7425716e4f0e441cc28d2035083da2b6cf9638ddcdbb0"} Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112460 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/085e24fa-319d-49ec-bad4-4f4b515e795a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112566 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112678 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rq8r\" (UniqueName: \"kubernetes.io/projected/085e24fa-319d-49ec-bad4-4f4b515e795a-kube-api-access-2rq8r\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/085e24fa-319d-49ec-bad4-4f4b515e795a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112740 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-cluster-tls-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112772 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.112834 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/085e24fa-319d-49ec-bad4-4f4b515e795a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217005 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rq8r\" (UniqueName: \"kubernetes.io/projected/085e24fa-319d-49ec-bad4-4f4b515e795a-kube-api-access-2rq8r\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217076 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/085e24fa-319d-49ec-bad4-4f4b515e795a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217150 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217207 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/085e24fa-319d-49ec-bad4-4f4b515e795a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217305 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/085e24fa-319d-49ec-bad4-4f4b515e795a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.217384 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.224077 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/085e24fa-319d-49ec-bad4-4f4b515e795a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.226022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.227434 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.229170 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/085e24fa-319d-49ec-bad4-4f4b515e795a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.229396 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/085e24fa-319d-49ec-bad4-4f4b515e795a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.229995 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/085e24fa-319d-49ec-bad4-4f4b515e795a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.274381 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rq8r\" (UniqueName: \"kubernetes.io/projected/085e24fa-319d-49ec-bad4-4f4b515e795a-kube-api-access-2rq8r\") pod \"alertmanager-metric-storage-0\" (UID: \"085e24fa-319d-49ec-bad4-4f4b515e795a\") " pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.317407 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.326856 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.653077 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.655204 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.658219 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663049 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663141 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663192 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663281 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663373 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xzqvd" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663390 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.663283 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.670785 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.680429 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.751297 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fkz\" (UniqueName: \"kubernetes.io/projected/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-kube-api-access-57fkz\") pod \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.751513 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config\") pod \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.751543 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config-secret\") pod \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.753791 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-combined-ca-bundle\") pod \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\" (UID: \"dc154ab8-e984-4ac6-aefd-4c5ecb03faa1\") " Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.754288 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755487 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755533 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1529b8b8-154e-4012-a6b2-2b571f7a0243-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755685 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755706 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755728 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-config\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755841 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755895 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.755933 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.756049 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrqq\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-kube-api-access-ncrqq\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.763177 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-kube-api-access-57fkz" (OuterVolumeSpecName: "kube-api-access-57fkz") pod "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" (UID: "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1"). InnerVolumeSpecName "kube-api-access-57fkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.788409 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" (UID: "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.831883 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" (UID: "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.834942 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" (UID: "dc154ab8-e984-4ac6-aefd-4c5ecb03faa1"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.863035 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc154ab8-e984-4ac6-aefd-4c5ecb03faa1" path="/var/lib/kubelet/pods/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1/volumes" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.865969 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866066 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866097 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1529b8b8-154e-4012-a6b2-2b571f7a0243-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866180 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866197 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866219 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-config\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866251 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866287 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc 
kubenswrapper[4982]: I0123 09:43:03.866314 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866386 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrqq\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-kube-api-access-ncrqq\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866450 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866463 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866473 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.866481 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fkz\" (UniqueName: \"kubernetes.io/projected/dc154ab8-e984-4ac6-aefd-4c5ecb03faa1-kube-api-access-57fkz\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.869807 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.870260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.874078 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.880376 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-config\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.886195 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.886234 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b24e499d83710e00677913f35f6230bdb5cce6e5b4f46f88180e75fde4fe4abf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.892189 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1529b8b8-154e-4012-a6b2-2b571f7a0243-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.892378 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.893935 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.894022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.895247 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrqq\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-kube-api-access-ncrqq\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:03 crc kubenswrapper[4982]: I0123 09:43:03.948162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.004891 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.046509 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.101676 4982 scope.go:117] "RemoveContainer" containerID="ed38a1ca8c1e5669c3259b885b55f496e0dbf5930c821ad04c149000882a974f" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.101816 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.105118 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"085e24fa-319d-49ec-bad4-4f4b515e795a","Type":"ContainerStarted","Data":"b9ecd773e2e8867cef09977bdfe1df0fbb876b560e2980e3e71a25346d00b073"} Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.106667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"18a97dbf-5734-4e1a-a8d6-685b06329521","Type":"ContainerStarted","Data":"259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9"} Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.107519 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.112700 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde","Type":"ContainerStarted","Data":"2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3"} Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.113020 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde","Type":"ContainerStarted","Data":"a760d9efc0571e719e537feb8b7b4fec918f74f38868a296c7379f4e73463d13"} Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.149595 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.567320154 podStartE2EDuration="5.149570638s" podCreationTimestamp="2026-01-23 09:42:59 +0000 UTC" firstStartedPulling="2026-01-23 09:43:03.069704596 +0000 UTC m=+6457.550067982" lastFinishedPulling="2026-01-23 09:43:03.65195508 +0000 UTC m=+6458.132318466" observedRunningTime="2026-01-23 09:43:04.13871044 +0000 UTC m=+6458.619073826" watchObservedRunningTime="2026-01-23 09:43:04.149570638 +0000 UTC m=+6458.629934024" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.163929 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=6.163907263 podStartE2EDuration="6.163907263s" podCreationTimestamp="2026-01-23 09:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:43:04.155203793 +0000 UTC m=+6458.635567199" watchObservedRunningTime="2026-01-23 09:43:04.163907263 +0000 UTC m=+6458.644270649" Jan 23 09:43:04 crc kubenswrapper[4982]: I0123 09:43:04.541589 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:04 crc kubenswrapper[4982]: W0123 09:43:04.546703 4982 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1529b8b8_154e_4012_a6b2_2b571f7a0243.slice/crio-ba5d6af8df0ac7fc2bfb2204f26a814ce1402bad4eeb82f6a14261007c829091 WatchSource:0}: Error finding container ba5d6af8df0ac7fc2bfb2204f26a814ce1402bad4eeb82f6a14261007c829091: Status 404 returned error can't find the container with id ba5d6af8df0ac7fc2bfb2204f26a814ce1402bad4eeb82f6a14261007c829091 Jan 23 09:43:05 crc kubenswrapper[4982]: I0123 09:43:05.127341 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerStarted","Data":"ba5d6af8df0ac7fc2bfb2204f26a814ce1402bad4eeb82f6a14261007c829091"} Jan 23 09:43:06 crc kubenswrapper[4982]: I0123 09:43:06.827956 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:43:06 crc kubenswrapper[4982]: E0123 09:43:06.829007 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:43:10 crc kubenswrapper[4982]: I0123 09:43:10.205738 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"085e24fa-319d-49ec-bad4-4f4b515e795a","Type":"ContainerStarted","Data":"6b8db06d106a08057cba59dfc1cab7c3776e3ff7bfd12391a868f520e72a9236"} Jan 23 09:43:10 crc kubenswrapper[4982]: I0123 09:43:10.211531 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerStarted","Data":"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929"} Jan 23 09:43:10 crc kubenswrapper[4982]: I0123 09:43:10.406155 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 09:43:17 crc kubenswrapper[4982]: I0123 09:43:17.283459 4982 generic.go:334] "Generic (PLEG): container finished" podID="085e24fa-319d-49ec-bad4-4f4b515e795a" containerID="6b8db06d106a08057cba59dfc1cab7c3776e3ff7bfd12391a868f520e72a9236" exitCode=0 Jan 23 09:43:17 crc kubenswrapper[4982]: I0123 09:43:17.283526 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"085e24fa-319d-49ec-bad4-4f4b515e795a","Type":"ContainerDied","Data":"6b8db06d106a08057cba59dfc1cab7c3776e3ff7bfd12391a868f520e72a9236"} Jan 23 09:43:17 crc kubenswrapper[4982]: I0123 09:43:17.286089 4982 generic.go:334] "Generic (PLEG): container finished" podID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerID="43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929" exitCode=0 Jan 23 09:43:17 crc kubenswrapper[4982]: I0123 09:43:17.286145 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerDied","Data":"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929"} Jan 23 09:43:18 crc kubenswrapper[4982]: I0123 09:43:18.031820 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-l6l7x"] Jan 23 09:43:18 crc kubenswrapper[4982]: I0123 
09:43:18.045128 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-l6l7x"] Jan 23 09:43:18 crc kubenswrapper[4982]: I0123 09:43:18.086965 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-471c-account-create-update-jqct2"] Jan 23 09:43:18 crc kubenswrapper[4982]: I0123 09:43:18.098084 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-471c-account-create-update-jqct2"] Jan 23 09:43:19 crc kubenswrapper[4982]: I0123 09:43:19.827879 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:43:19 crc kubenswrapper[4982]: E0123 09:43:19.828725 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:43:19 crc kubenswrapper[4982]: I0123 09:43:19.849242 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a09655-a300-46ad-8564-09230e4ea5b9" path="/var/lib/kubelet/pods/60a09655-a300-46ad-8564-09230e4ea5b9/volumes" Jan 23 09:43:19 crc kubenswrapper[4982]: I0123 09:43:19.850079 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2c64bd-230e-4c7c-b065-03ae7161d5b1" path="/var/lib/kubelet/pods/fc2c64bd-230e-4c7c-b065-03ae7161d5b1/volumes" Jan 23 09:43:20 crc kubenswrapper[4982]: I0123 09:43:20.325788 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"085e24fa-319d-49ec-bad4-4f4b515e795a","Type":"ContainerStarted","Data":"22f8723350dc9d97ba11595a5954549f234c21c1ca0a6cf748c6c1cdb53ab97b"} Jan 23 09:43:24 crc kubenswrapper[4982]: I0123 09:43:24.379864 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"085e24fa-319d-49ec-bad4-4f4b515e795a","Type":"ContainerStarted","Data":"05346567f47986d5baed9893d2cb835bfc9bb0eaf912e0f8e7c5c91887c1d6e9"} Jan 23 09:43:24 crc kubenswrapper[4982]: I0123 09:43:24.382070 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:24 crc kubenswrapper[4982]: I0123 09:43:24.383595 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Jan 23 09:43:24 crc kubenswrapper[4982]: I0123 09:43:24.417730 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.352309939 podStartE2EDuration="22.417709402s" podCreationTimestamp="2026-01-23 09:43:02 +0000 UTC" firstStartedPulling="2026-01-23 09:43:04.048180851 +0000 UTC m=+6458.528544247" lastFinishedPulling="2026-01-23 09:43:19.113580324 +0000 UTC m=+6473.593943710" observedRunningTime="2026-01-23 09:43:24.408708845 +0000 UTC m=+6478.889072251" watchObservedRunningTime="2026-01-23 09:43:24.417709402 +0000 UTC m=+6478.898072798" Jan 23 09:43:25 crc kubenswrapper[4982]: I0123 09:43:25.033443 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-rtc2n"] Jan 23 09:43:25 crc kubenswrapper[4982]: I0123 09:43:25.048824 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-db-sync-rtc2n"] Jan 23 09:43:25 crc kubenswrapper[4982]: I0123 09:43:25.391781 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerStarted","Data":"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761"} Jan 23 09:43:25 crc kubenswrapper[4982]: I0123 09:43:25.839545 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be5403b-2787-4030-905d-26349736b5b4" path="/var/lib/kubelet/pods/1be5403b-2787-4030-905d-26349736b5b4/volumes" Jan 23 09:43:29 crc kubenswrapper[4982]: I0123 09:43:29.456535 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerStarted","Data":"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585"} Jan 23 09:43:32 crc kubenswrapper[4982]: I0123 09:43:32.488609 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerStarted","Data":"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0"} Jan 23 09:43:32 crc kubenswrapper[4982]: I0123 09:43:32.515520 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.117705799 podStartE2EDuration="30.515500589s" podCreationTimestamp="2026-01-23 09:43:02 +0000 UTC" firstStartedPulling="2026-01-23 09:43:04.552487824 +0000 UTC m=+6459.032851210" lastFinishedPulling="2026-01-23 09:43:31.950282614 +0000 UTC m=+6486.430646000" observedRunningTime="2026-01-23 09:43:32.511418177 +0000 UTC m=+6486.991781573" watchObservedRunningTime="2026-01-23 09:43:32.515500589 +0000 UTC m=+6486.995863975" Jan 23 09:43:32 crc kubenswrapper[4982]: I0123 09:43:32.828158 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:43:32 crc kubenswrapper[4982]: E0123 09:43:32.828969 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:43:34 crc kubenswrapper[4982]: I0123 09:43:34.006667 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:34 crc kubenswrapper[4982]: I0123 09:43:34.007113 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:34 crc kubenswrapper[4982]: I0123 09:43:34.010640 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:34 crc kubenswrapper[4982]: I0123 09:43:34.510201 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.872265 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.872906 4982 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstackclient" podUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" containerName="openstackclient" containerID="cri-o://2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3" gracePeriod=2 Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.881126 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" podUID="0521df85-12eb-406a-a7db-b876ddc88d26" Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.889888 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.899681 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 09:43:35 crc kubenswrapper[4982]: E0123 09:43:35.900119 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" containerName="openstackclient" Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.900136 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" containerName="openstackclient" Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.900384 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" containerName="openstackclient" Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.901153 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:43:35 crc kubenswrapper[4982]: I0123 09:43:35.911871 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.014481 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d94k\" (UniqueName: \"kubernetes.io/projected/0521df85-12eb-406a-a7db-b876ddc88d26-kube-api-access-4d94k\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.014546 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0521df85-12eb-406a-a7db-b876ddc88d26-openstack-config\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.014611 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0521df85-12eb-406a-a7db-b876ddc88d26-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.014673 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0521df85-12eb-406a-a7db-b876ddc88d26-openstack-config-secret\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.116528 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d94k\" (UniqueName: \"kubernetes.io/projected/0521df85-12eb-406a-a7db-b876ddc88d26-kube-api-access-4d94k\") pod 
\"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.116583 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0521df85-12eb-406a-a7db-b876ddc88d26-openstack-config\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.116670 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0521df85-12eb-406a-a7db-b876ddc88d26-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.116721 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0521df85-12eb-406a-a7db-b876ddc88d26-openstack-config-secret\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.117788 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0521df85-12eb-406a-a7db-b876ddc88d26-openstack-config\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.129410 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0521df85-12eb-406a-a7db-b876ddc88d26-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.135858 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0521df85-12eb-406a-a7db-b876ddc88d26-openstack-config-secret\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.146198 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d94k\" (UniqueName: \"kubernetes.io/projected/0521df85-12eb-406a-a7db-b876ddc88d26-kube-api-access-4d94k\") pod \"openstackclient\" (UID: \"0521df85-12eb-406a-a7db-b876ddc88d26\") " pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.222754 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.780096 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 09:43:36 crc kubenswrapper[4982]: W0123 09:43:36.780897 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0521df85_12eb_406a_a7db_b876ddc88d26.slice/crio-da679f708edbe73086fb67c741723789a820dc7f5bec26374269b46074af72a4 WatchSource:0}: Error finding container da679f708edbe73086fb67c741723789a820dc7f5bec26374269b46074af72a4: Status 404 returned error can't find the container with id da679f708edbe73086fb67c741723789a820dc7f5bec26374269b46074af72a4 Jan 23 09:43:36 crc kubenswrapper[4982]: I0123 09:43:36.831751 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:37 crc kubenswrapper[4982]: I0123 09:43:37.550929 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="prometheus" containerID="cri-o://8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" gracePeriod=600 Jan 23 09:43:37 crc kubenswrapper[4982]: I0123 09:43:37.551271 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0521df85-12eb-406a-a7db-b876ddc88d26","Type":"ContainerStarted","Data":"48c446efb8e0589ed159a5c631c466d91648f35d23d926e39c873e7b2b455d77"} Jan 23 09:43:37 crc kubenswrapper[4982]: I0123 09:43:37.551231 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="thanos-sidecar" containerID="cri-o://ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" gracePeriod=600 Jan 23 09:43:37 crc kubenswrapper[4982]: I0123 09:43:37.551237 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="config-reloader" containerID="cri-o://d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" gracePeriod=600 Jan 23 09:43:37 crc kubenswrapper[4982]: I0123 09:43:37.551310 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0521df85-12eb-406a-a7db-b876ddc88d26","Type":"ContainerStarted","Data":"da679f708edbe73086fb67c741723789a820dc7f5bec26374269b46074af72a4"} Jan 23 09:43:37 crc kubenswrapper[4982]: I0123 09:43:37.574901 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.5748813999999998 podStartE2EDuration="2.5748814s" podCreationTimestamp="2026-01-23 09:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:43:37.571712743 +0000 UTC m=+6492.052076139" watchObservedRunningTime="2026-01-23 09:43:37.5748814 +0000 UTC m=+6492.055244786" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.217486 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.298025 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxvvx\" (UniqueName: \"kubernetes.io/projected/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-kube-api-access-xxvvx\") pod \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.298183 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config\") pod \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.298272 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-combined-ca-bundle\") pod \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.298492 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config-secret\") pod \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\" (UID: \"e05cc5d3-303d-4e67-a8a4-0e2dd9555dde\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.335449 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-kube-api-access-xxvvx" (OuterVolumeSpecName: "kube-api-access-xxvvx") pod "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" (UID: "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde"). InnerVolumeSpecName "kube-api-access-xxvvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.348947 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" (UID: "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.347611 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" (UID: "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.406657 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxvvx\" (UniqueName: \"kubernetes.io/projected/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-kube-api-access-xxvvx\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.406698 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.406749 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.441420 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" (UID: "e05cc5d3-303d-4e67-a8a4-0e2dd9555dde"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.509771 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.560049 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.561759 4982 generic.go:334] "Generic (PLEG): container finished" podID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" containerID="2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3" exitCode=137 Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.561830 4982 scope.go:117] "RemoveContainer" containerID="2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.562376 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.576345 4982 generic.go:334] "Generic (PLEG): container finished" podID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerID="ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" exitCode=0 Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.577159 4982 generic.go:334] "Generic (PLEG): container finished" podID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerID="d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" exitCode=0 Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.577177 4982 generic.go:334] "Generic (PLEG): container finished" podID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerID="8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" exitCode=0 Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.576424 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.576438 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerDied","Data":"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0"} Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.577355 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerDied","Data":"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585"} Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.577379 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerDied","Data":"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761"} Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.577396 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1529b8b8-154e-4012-a6b2-2b571f7a0243","Type":"ContainerDied","Data":"ba5d6af8df0ac7fc2bfb2204f26a814ce1402bad4eeb82f6a14261007c829091"} Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.595504 4982 scope.go:117] "RemoveContainer" containerID="2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.599322 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3\": container with ID starting with 2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3 not found: ID does not exist" containerID="2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.599370 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3"} err="failed to get container status \"2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3\": rpc error: code = NotFound desc = could not find container \"2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3\": container with ID starting with 2577c444b43ec522a2475d271bc49eb24d535bbb6538dd446cc960e86b87b6c3 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.599441 4982 scope.go:117] "RemoveContainer" containerID="ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.624822 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-web-config\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.624979 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-1\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625019 4982 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-2\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625182 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625239 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-tls-assets\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625291 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-config\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625323 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1529b8b8-154e-4012-a6b2-2b571f7a0243-config-out\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625439 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-thanos-prometheus-http-client-file\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625496 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncrqq\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-kube-api-access-ncrqq\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.625525 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-0\") pod \"1529b8b8-154e-4012-a6b2-2b571f7a0243\" (UID: \"1529b8b8-154e-4012-a6b2-2b571f7a0243\") " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.635608 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.636111 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1529b8b8-154e-4012-a6b2-2b571f7a0243-config-out" (OuterVolumeSpecName: "config-out") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.636499 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.637159 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.639270 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.640958 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-config" (OuterVolumeSpecName: "config") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.641310 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-kube-api-access-ncrqq" (OuterVolumeSpecName: "kube-api-access-ncrqq") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "kube-api-access-ncrqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.641370 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.645450 4982 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" podUID="0521df85-12eb-406a-a7db-b876ddc88d26" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.668749 4982 scope.go:117] "RemoveContainer" containerID="d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.691190 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.694286 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-web-config" (OuterVolumeSpecName: "web-config") pod "1529b8b8-154e-4012-a6b2-2b571f7a0243" (UID: "1529b8b8-154e-4012-a6b2-2b571f7a0243"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.703467 4982 scope.go:117] "RemoveContainer" containerID="8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730556 4982 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730591 4982 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730618 4982 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") on node \"crc\" " Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730645 4982 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730656 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730664 4982 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1529b8b8-154e-4012-a6b2-2b571f7a0243-config-out\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730675 4982 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730687 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncrqq\" (UniqueName: \"kubernetes.io/projected/1529b8b8-154e-4012-a6b2-2b571f7a0243-kube-api-access-ncrqq\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730697 4982 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1529b8b8-154e-4012-a6b2-2b571f7a0243-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.730705 4982 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1529b8b8-154e-4012-a6b2-2b571f7a0243-web-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.733302 4982 scope.go:117] "RemoveContainer" containerID="43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.756511 4982 scope.go:117] "RemoveContainer" containerID="ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.763399 4982 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.763800 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": container with ID starting with ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0 not found: ID does not exist" containerID="ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.763856 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0"} err="failed to get container status \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": rpc error: code = NotFound desc = could not find container \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": container with ID starting with ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.763889 4982 scope.go:117] "RemoveContainer" containerID="d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.764325 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": container with ID starting with d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585 not found: ID does not exist" containerID="d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.764361 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585"} err="failed to get container status 
\"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": rpc error: code = NotFound desc = could not find container \"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": container with ID starting with d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.764384 4982 scope.go:117] "RemoveContainer" containerID="8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.764734 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": container with ID starting with 8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761 not found: ID does not exist" containerID="8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.764757 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761"} err="failed to get container status \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": rpc error: code = NotFound desc = could not find container \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": container with ID starting with 8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.764770 4982 scope.go:117] "RemoveContainer" containerID="43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.765794 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": container with ID starting with 43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929 not found: ID does not exist" containerID="43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.765815 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929"} err="failed to get container status \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": rpc error: code = NotFound desc = could not find container \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": container with ID starting with 43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.765829 4982 scope.go:117] "RemoveContainer" containerID="ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.766317 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0"} err="failed to get container status \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": rpc error: code = NotFound desc = could not find container \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": container with ID starting with ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0 not found: 
ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.766349 4982 scope.go:117] "RemoveContainer" containerID="d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.766449 4982 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b") on node "crc" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.774211 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585"} err="failed to get container status \"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": rpc error: code = NotFound desc = could not find container \"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": container with ID starting with d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.774240 4982 scope.go:117] "RemoveContainer" containerID="8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.774584 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761"} err="failed to get container status \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": rpc error: code = NotFound desc = could not find container \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": container with ID starting with 8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.774653 4982 scope.go:117] "RemoveContainer" containerID="43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.774913 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929"} err="failed to get container status \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": rpc error: code = NotFound desc = could not find container \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": container with ID starting with 43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.774930 4982 scope.go:117] "RemoveContainer" containerID="ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.775212 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0"} err="failed to get container status \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": rpc error: code = NotFound desc = could not find container \"ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0\": container with ID starting with ac49d0566148cb5b6530bb03e7fe170bca8ed372d580681bf28ddcdcd56260b0 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.775278 4982 scope.go:117] "RemoveContainer" 
containerID="d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.777329 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585"} err="failed to get container status \"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": rpc error: code = NotFound desc = could not find container \"d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585\": container with ID starting with d771b4855f10fb8fb74c22fbcc129df7f1f8bdc982b08e402f68b5f12382b585 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.777365 4982 scope.go:117] "RemoveContainer" containerID="8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.778466 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761"} err="failed to get container status \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": rpc error: code = NotFound desc = could not find container \"8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761\": container with ID starting with 8445c1a63d9aa4ea24ade9ea18090a6fbf93f64a29990c97f98d076ccce07761 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.778509 4982 scope.go:117] "RemoveContainer" containerID="43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.778765 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929"} err="failed to get container status \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": rpc error: code = NotFound desc = could not find container \"43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929\": container with ID starting with 43631f64829a1f98f96af560f3c5f1f5a53edae9f12e398547c0b21df70db929 not found: ID does not exist" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.831817 4982 reconciler_common.go:293] "Volume detached for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.917293 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.927491 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.944397 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.944976 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="prometheus" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.944997 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="prometheus" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.945019 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="init-config-reloader" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.945027 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="init-config-reloader" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.945062 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="thanos-sidecar" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.945071 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="thanos-sidecar" Jan 23 09:43:38 crc kubenswrapper[4982]: E0123 09:43:38.945088 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="config-reloader" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.945096 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="config-reloader" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.945429 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="config-reloader" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.945452 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="prometheus" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.945479 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" containerName="thanos-sidecar" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.948230 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.950828 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.951984 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.956299 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.967181 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.987076 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.989541 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.989749 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.989881 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xzqvd" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.990767 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.991108 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.991213 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.991321 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.991441 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 23 09:43:38 crc kubenswrapper[4982]: I0123 09:43:38.996997 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.017605 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.035966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036022 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036048 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036069 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " 
pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036106 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-log-httpd\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036140 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036155 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwmvg\" (UniqueName: \"kubernetes.io/projected/06c55dbd-0f43-4959-9035-a1d07894d0bb-kube-api-access-vwmvg\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036184 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7c9\" (UniqueName: \"kubernetes.io/projected/114a886f-cd3c-460e-93a2-9f459ea999d7-kube-api-access-dp7c9\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036203 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036247 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036298 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036318 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-config-data\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036337 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-run-httpd\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036372 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c55dbd-0f43-4959-9035-a1d07894d0bb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036391 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036420 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-scripts\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036474 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c55dbd-0f43-4959-9035-a1d07894d0bb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.036490 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.138895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-scripts\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139303 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139372 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c55dbd-0f43-4959-9035-a1d07894d0bb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139395 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139432 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139464 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139492 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139519 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139569 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-log-httpd\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139619 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139662 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vwmvg\" (UniqueName: \"kubernetes.io/projected/06c55dbd-0f43-4959-9035-a1d07894d0bb-kube-api-access-vwmvg\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139709 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7c9\" (UniqueName: \"kubernetes.io/projected/114a886f-cd3c-460e-93a2-9f459ea999d7-kube-api-access-dp7c9\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139736 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139762 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139789 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139867 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139895 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-config-data\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139922 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-run-httpd\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.139975 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c55dbd-0f43-4959-9035-a1d07894d0bb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.140006 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.140322 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.140532 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.140719 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/06c55dbd-0f43-4959-9035-a1d07894d0bb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.140929 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-log-httpd\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.142281 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-run-httpd\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.143509 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.143287 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/06c55dbd-0f43-4959-9035-a1d07894d0bb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.144398 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-scripts\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.146162 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-config-data\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.146489 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.146582 4982 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.146605 4982 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b24e499d83710e00677913f35f6230bdb5cce6e5b4f46f88180e75fde4fe4abf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.148260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/06c55dbd-0f43-4959-9035-a1d07894d0bb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.150467 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.150598 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.152111 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.155598 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.162874 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.165082 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwmvg\" (UniqueName: \"kubernetes.io/projected/06c55dbd-0f43-4959-9035-a1d07894d0bb-kube-api-access-vwmvg\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.172237 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7c9\" (UniqueName: \"kubernetes.io/projected/114a886f-cd3c-460e-93a2-9f459ea999d7-kube-api-access-dp7c9\") pod \"ceilometer-0\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.172269 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/06c55dbd-0f43-4959-9035-a1d07894d0bb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.217083 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d33b731-cec5-4055-95fd-c57bfaf1ff8b\") pod \"prometheus-metric-storage-0\" (UID: \"06c55dbd-0f43-4959-9035-a1d07894d0bb\") " pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.269475 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.308823 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.812840 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.843590 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1529b8b8-154e-4012-a6b2-2b571f7a0243" path="/var/lib/kubelet/pods/1529b8b8-154e-4012-a6b2-2b571f7a0243/volumes" Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.845780 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05cc5d3-303d-4e67-a8a4-0e2dd9555dde" path="/var/lib/kubelet/pods/e05cc5d3-303d-4e67-a8a4-0e2dd9555dde/volumes" Jan 23 09:43:39 crc kubenswrapper[4982]: W0123 09:43:39.878911 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06c55dbd_0f43_4959_9035_a1d07894d0bb.slice/crio-011996c658536dcea6337a50eff33ee22a524b2e250ed3f02e3d123853f76fb6 WatchSource:0}: Error finding container 011996c658536dcea6337a50eff33ee22a524b2e250ed3f02e3d123853f76fb6: Status 404 returned error can't find the container with id 011996c658536dcea6337a50eff33ee22a524b2e250ed3f02e3d123853f76fb6 Jan 23 09:43:39 crc kubenswrapper[4982]: I0123 09:43:39.890349 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 23 09:43:40 crc kubenswrapper[4982]: I0123 09:43:40.609102 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerStarted","Data":"26e320bc4bfa5f7786a4a5570dc699bf3e64152a98c4f4d4e6dce07fb11a76a5"} Jan 23 09:43:40 crc kubenswrapper[4982]: I0123 09:43:40.610789 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerStarted","Data":"011996c658536dcea6337a50eff33ee22a524b2e250ed3f02e3d123853f76fb6"} Jan 23 09:43:41 crc kubenswrapper[4982]: I0123 09:43:41.623593 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerStarted","Data":"f79f5572265681bd0ab28b09f6e1d054441ae64d68ea24d4fcc7e1c40bf989ef"} Jan 23 09:43:41 crc kubenswrapper[4982]: I0123 09:43:41.624143 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerStarted","Data":"b7f4a9fb18d808595ee0c6ce5168471ae3c7d3ee93dce2fac68bd5c1304b452c"} Jan 23 09:43:42 crc kubenswrapper[4982]: I0123 09:43:42.641425 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerStarted","Data":"da75e0219eb2f880f6a91082691fb9bdd4022e361ee506d33dfaa422dcd5c5e0"} Jan 23 09:43:43 crc kubenswrapper[4982]: I0123 09:43:43.827201 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:43:43 crc kubenswrapper[4982]: E0123 09:43:43.827902 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:43:44 crc kubenswrapper[4982]: I0123 09:43:44.660594 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerStarted","Data":"5db0aaf8614693088c86f19271dad8cadf245e6166f674cb9837a9b6d64b22cc"} Jan 23 09:43:44 crc kubenswrapper[4982]: I0123 09:43:44.664075 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerStarted","Data":"d99d486daf78654482d841633af77cf1bcfd36517f6d2e08e1da3d69c5581d4f"} Jan 23 09:43:44 crc kubenswrapper[4982]: I0123 09:43:44.664265 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:43:44 crc kubenswrapper[4982]: I0123 09:43:44.725556 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.640635491 podStartE2EDuration="6.725534175s" podCreationTimestamp="2026-01-23 09:43:38 +0000 UTC" firstStartedPulling="2026-01-23 09:43:39.82170916 +0000 UTC m=+6494.302072556" lastFinishedPulling="2026-01-23 09:43:43.906607854 +0000 UTC m=+6498.386971240" observedRunningTime="2026-01-23 09:43:44.714406539 +0000 UTC m=+6499.194769925" watchObservedRunningTime="2026-01-23 09:43:44.725534175 +0000 UTC m=+6499.205897561" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.408236 4982 scope.go:117] "RemoveContainer" containerID="081d7b45a828452d88231f20d4291093aba7ccc4bd971d2929240842528f7e11" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.431239 4982 scope.go:117] "RemoveContainer" containerID="8135f62e3522ac8c609f4b299b47a5b0614b5de6703c04242121010146101b8d" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.490723 4982 scope.go:117] "RemoveContainer" containerID="e3eb91263721654b3569a23212edc6c6425b1e9f11c1ba3b1dfaa78bb1ddb4b3" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.529900 4982 scope.go:117] "RemoveContainer" containerID="f7ce3d1bdf0bf6230ba8dcaa5f0be7062c9b5aa0c9ab499e68892f23f44bcff7" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.578833 4982 scope.go:117] "RemoveContainer" containerID="2f3bbb46511fb638b7c64856fc0e502397bb3767cc8bbd3f1a0cf17a9355bc91" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.620398 4982 scope.go:117] "RemoveContainer" containerID="5a719dda35d6b122533b733e91f9a2ee46fc9f44d8ee810ef7f86f4feeefc40d" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.696645 4982 scope.go:117] "RemoveContainer" containerID="6257371c3cf6cbe7e0e138c5eb843e130e1a3e4167df86b6b6b3e2a3628c327e" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.724982 4982 scope.go:117] "RemoveContainer" containerID="43ef017812b2195689b5f221ee9b4e25977e91a029129610351a5c738e3c42b6" Jan 23 09:43:46 crc kubenswrapper[4982]: I0123 09:43:46.749000 4982 scope.go:117] "RemoveContainer" containerID="bac8b59dbbea746723c3f02aa1f06b301d4b21c1b1b8f2c229db32d478339acc" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.395449 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-f8fsb"] Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.397404 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.439162 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-f8fsb"] Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.494966 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7gf\" (UniqueName: \"kubernetes.io/projected/825b7887-ce31-452f-aaf6-450627701e7f-kube-api-access-7n7gf\") pod \"aodh-db-create-f8fsb\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.495063 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825b7887-ce31-452f-aaf6-450627701e7f-operator-scripts\") pod \"aodh-db-create-f8fsb\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.502931 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-c016-account-create-update-c97cm"] Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.504267 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.507030 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.515241 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-c016-account-create-update-c97cm"] Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.597434 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7gf\" (UniqueName: \"kubernetes.io/projected/825b7887-ce31-452f-aaf6-450627701e7f-kube-api-access-7n7gf\") pod \"aodh-db-create-f8fsb\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.597550 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825b7887-ce31-452f-aaf6-450627701e7f-operator-scripts\") pod \"aodh-db-create-f8fsb\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.598303 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825b7887-ce31-452f-aaf6-450627701e7f-operator-scripts\") pod \"aodh-db-create-f8fsb\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.621307 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7gf\" (UniqueName: \"kubernetes.io/projected/825b7887-ce31-452f-aaf6-450627701e7f-kube-api-access-7n7gf\") pod \"aodh-db-create-f8fsb\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.699226 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d991f-28e4-4c77-9e2c-98edf4142b9b-operator-scripts\") pod \"aodh-c016-account-create-update-c97cm\" 
(UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.699591 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zqp\" (UniqueName: \"kubernetes.io/projected/355d991f-28e4-4c77-9e2c-98edf4142b9b-kube-api-access-f6zqp\") pod \"aodh-c016-account-create-update-c97cm\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.714997 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.801473 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zqp\" (UniqueName: \"kubernetes.io/projected/355d991f-28e4-4c77-9e2c-98edf4142b9b-kube-api-access-f6zqp\") pod \"aodh-c016-account-create-update-c97cm\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.802577 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d991f-28e4-4c77-9e2c-98edf4142b9b-operator-scripts\") pod \"aodh-c016-account-create-update-c97cm\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.803480 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d991f-28e4-4c77-9e2c-98edf4142b9b-operator-scripts\") pod \"aodh-c016-account-create-update-c97cm\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:49 crc kubenswrapper[4982]: I0123 09:43:49.847980 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zqp\" (UniqueName: \"kubernetes.io/projected/355d991f-28e4-4c77-9e2c-98edf4142b9b-kube-api-access-f6zqp\") pod \"aodh-c016-account-create-update-c97cm\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.127050 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.320090 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-f8fsb"] Jan 23 09:43:50 crc kubenswrapper[4982]: W0123 09:43:50.325299 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod825b7887_ce31_452f_aaf6_450627701e7f.slice/crio-1feb6f7a103c9bef2ed8f9e0259199b6f0cf1cc8ef5d36ae18f462182c93ab6a WatchSource:0}: Error finding container 1feb6f7a103c9bef2ed8f9e0259199b6f0cf1cc8ef5d36ae18f462182c93ab6a: Status 404 returned error can't find the container with id 1feb6f7a103c9bef2ed8f9e0259199b6f0cf1cc8ef5d36ae18f462182c93ab6a Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.650114 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-c016-account-create-update-c97cm"] Jan 23 09:43:50 crc kubenswrapper[4982]: W0123 09:43:50.650663 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod355d991f_28e4_4c77_9e2c_98edf4142b9b.slice/crio-4c1fae7b4c11cffbaf669114a24cc45b00ac374050302196a54db047a6cf45ea WatchSource:0}: Error finding container 4c1fae7b4c11cffbaf669114a24cc45b00ac374050302196a54db047a6cf45ea: Status 404 returned error can't find the container with id 4c1fae7b4c11cffbaf669114a24cc45b00ac374050302196a54db047a6cf45ea Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.753574 4982 generic.go:334] "Generic (PLEG): container finished" podID="825b7887-ce31-452f-aaf6-450627701e7f" containerID="61352dfbba815769d0ed9888825a809a01ef0ae7e2bacb0d731e94c7d3569c4a" exitCode=0 Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.753654 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f8fsb" event={"ID":"825b7887-ce31-452f-aaf6-450627701e7f","Type":"ContainerDied","Data":"61352dfbba815769d0ed9888825a809a01ef0ae7e2bacb0d731e94c7d3569c4a"} Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.753679 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f8fsb" event={"ID":"825b7887-ce31-452f-aaf6-450627701e7f","Type":"ContainerStarted","Data":"1feb6f7a103c9bef2ed8f9e0259199b6f0cf1cc8ef5d36ae18f462182c93ab6a"} Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.757659 4982 generic.go:334] "Generic (PLEG): container finished" podID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerID="5db0aaf8614693088c86f19271dad8cadf245e6166f674cb9837a9b6d64b22cc" exitCode=0 Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.757749 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerDied","Data":"5db0aaf8614693088c86f19271dad8cadf245e6166f674cb9837a9b6d64b22cc"} Jan 23 09:43:50 crc kubenswrapper[4982]: I0123 09:43:50.759554 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-c016-account-create-update-c97cm" event={"ID":"355d991f-28e4-4c77-9e2c-98edf4142b9b","Type":"ContainerStarted","Data":"4c1fae7b4c11cffbaf669114a24cc45b00ac374050302196a54db047a6cf45ea"} Jan 23 09:43:51 crc kubenswrapper[4982]: I0123 09:43:51.778237 4982 generic.go:334] "Generic (PLEG): container finished" podID="355d991f-28e4-4c77-9e2c-98edf4142b9b" containerID="abd9b2b03dafba2dc3a39a20cc3dfb03197779569c0a7f39432f861b9751cdaa" exitCode=0 Jan 23 09:43:51 crc kubenswrapper[4982]: I0123 
09:43:51.778795 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-c016-account-create-update-c97cm" event={"ID":"355d991f-28e4-4c77-9e2c-98edf4142b9b","Type":"ContainerDied","Data":"abd9b2b03dafba2dc3a39a20cc3dfb03197779569c0a7f39432f861b9751cdaa"} Jan 23 09:43:51 crc kubenswrapper[4982]: I0123 09:43:51.783435 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerStarted","Data":"541469452be913385126ddf7b3bdb454fe361462b6bc4e097c85494c52ac27d9"} Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.288472 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.380558 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825b7887-ce31-452f-aaf6-450627701e7f-operator-scripts\") pod \"825b7887-ce31-452f-aaf6-450627701e7f\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.384233 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825b7887-ce31-452f-aaf6-450627701e7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "825b7887-ce31-452f-aaf6-450627701e7f" (UID: "825b7887-ce31-452f-aaf6-450627701e7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.390119 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7gf\" (UniqueName: \"kubernetes.io/projected/825b7887-ce31-452f-aaf6-450627701e7f-kube-api-access-7n7gf\") pod \"825b7887-ce31-452f-aaf6-450627701e7f\" (UID: \"825b7887-ce31-452f-aaf6-450627701e7f\") " Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.391068 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825b7887-ce31-452f-aaf6-450627701e7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.417897 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825b7887-ce31-452f-aaf6-450627701e7f-kube-api-access-7n7gf" (OuterVolumeSpecName: "kube-api-access-7n7gf") pod "825b7887-ce31-452f-aaf6-450627701e7f" (UID: "825b7887-ce31-452f-aaf6-450627701e7f"). InnerVolumeSpecName "kube-api-access-7n7gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.493217 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7gf\" (UniqueName: \"kubernetes.io/projected/825b7887-ce31-452f-aaf6-450627701e7f-kube-api-access-7n7gf\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.797207 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-f8fsb" Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.798699 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-f8fsb" event={"ID":"825b7887-ce31-452f-aaf6-450627701e7f","Type":"ContainerDied","Data":"1feb6f7a103c9bef2ed8f9e0259199b6f0cf1cc8ef5d36ae18f462182c93ab6a"} Jan 23 09:43:52 crc kubenswrapper[4982]: I0123 09:43:52.798765 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1feb6f7a103c9bef2ed8f9e0259199b6f0cf1cc8ef5d36ae18f462182c93ab6a" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.201287 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.311619 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6zqp\" (UniqueName: \"kubernetes.io/projected/355d991f-28e4-4c77-9e2c-98edf4142b9b-kube-api-access-f6zqp\") pod \"355d991f-28e4-4c77-9e2c-98edf4142b9b\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.311788 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d991f-28e4-4c77-9e2c-98edf4142b9b-operator-scripts\") pod \"355d991f-28e4-4c77-9e2c-98edf4142b9b\" (UID: \"355d991f-28e4-4c77-9e2c-98edf4142b9b\") " Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.313207 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355d991f-28e4-4c77-9e2c-98edf4142b9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "355d991f-28e4-4c77-9e2c-98edf4142b9b" (UID: "355d991f-28e4-4c77-9e2c-98edf4142b9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.319664 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355d991f-28e4-4c77-9e2c-98edf4142b9b-kube-api-access-f6zqp" (OuterVolumeSpecName: "kube-api-access-f6zqp") pod "355d991f-28e4-4c77-9e2c-98edf4142b9b" (UID: "355d991f-28e4-4c77-9e2c-98edf4142b9b"). InnerVolumeSpecName "kube-api-access-f6zqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.414280 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6zqp\" (UniqueName: \"kubernetes.io/projected/355d991f-28e4-4c77-9e2c-98edf4142b9b-kube-api-access-f6zqp\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.414328 4982 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d991f-28e4-4c77-9e2c-98edf4142b9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.808024 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-c016-account-create-update-c97cm" event={"ID":"355d991f-28e4-4c77-9e2c-98edf4142b9b","Type":"ContainerDied","Data":"4c1fae7b4c11cffbaf669114a24cc45b00ac374050302196a54db047a6cf45ea"} Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.808059 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1fae7b4c11cffbaf669114a24cc45b00ac374050302196a54db047a6cf45ea" Jan 23 09:43:53 crc kubenswrapper[4982]: I0123 09:43:53.809132 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-c016-account-create-update-c97cm" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.845270 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-5r942"] Jan 23 09:43:54 crc kubenswrapper[4982]: E0123 09:43:54.846368 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355d991f-28e4-4c77-9e2c-98edf4142b9b" containerName="mariadb-account-create-update" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.846393 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="355d991f-28e4-4c77-9e2c-98edf4142b9b" containerName="mariadb-account-create-update" Jan 23 09:43:54 crc kubenswrapper[4982]: E0123 09:43:54.846420 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825b7887-ce31-452f-aaf6-450627701e7f" containerName="mariadb-database-create" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.846427 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="825b7887-ce31-452f-aaf6-450627701e7f" containerName="mariadb-database-create" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.846736 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="355d991f-28e4-4c77-9e2c-98edf4142b9b" containerName="mariadb-account-create-update" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.846770 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="825b7887-ce31-452f-aaf6-450627701e7f" containerName="mariadb-database-create" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.847540 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.861753 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ljj56" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.862104 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.862329 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.862497 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.889035 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5r942"] Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.946301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66hzz\" (UniqueName: \"kubernetes.io/projected/75f4ee61-f493-4560-91a7-229558683ade-kube-api-access-66hzz\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.946389 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-scripts\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.946408 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-combined-ca-bundle\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:54 crc kubenswrapper[4982]: I0123 09:43:54.946637 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-config-data\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.048651 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-config-data\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.048728 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66hzz\" (UniqueName: \"kubernetes.io/projected/75f4ee61-f493-4560-91a7-229558683ade-kube-api-access-66hzz\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.048774 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-scripts\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: 
I0123 09:43:55.048791 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-combined-ca-bundle\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.053697 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-combined-ca-bundle\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.054043 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-scripts\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.068720 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-config-data\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.069231 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66hzz\" (UniqueName: \"kubernetes.io/projected/75f4ee61-f493-4560-91a7-229558683ade-kube-api-access-66hzz\") pod \"aodh-db-sync-5r942\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.192391 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5r942" Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.781037 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-5r942"] Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.859201 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5r942" event={"ID":"75f4ee61-f493-4560-91a7-229558683ade","Type":"ContainerStarted","Data":"f2a221977a5b2882322866d16c33e70e744fe9e5fe70be94c3995bdbc686322d"} Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.859260 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerStarted","Data":"e5c87e2c317123d6dc9ce1b471587570860b8a93297ae21c1a13adc28d48efdc"} Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.859279 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerStarted","Data":"26d74aa4c8d780b072913e174f7c88e35016dbae2d2bf2798000e8f0310778a5"} Jan 23 09:43:55 crc kubenswrapper[4982]: I0123 09:43:55.906369 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.906349369 podStartE2EDuration="17.906349369s" podCreationTimestamp="2026-01-23 09:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:43:55.893880066 +0000 UTC m=+6510.374243452" watchObservedRunningTime="2026-01-23 09:43:55.906349369 +0000 UTC m=+6510.386712755" Jan 23 09:43:57 crc kubenswrapper[4982]: I0123 09:43:57.828524 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:43:57 crc kubenswrapper[4982]: E0123 09:43:57.829485 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:43:59 crc kubenswrapper[4982]: I0123 09:43:59.309188 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 09:44:01 crc kubenswrapper[4982]: I0123 09:44:01.905472 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5r942" event={"ID":"75f4ee61-f493-4560-91a7-229558683ade","Type":"ContainerStarted","Data":"e80fd41ea97677ad5986663394776db9499cdfc6c98155f55f5e5fd42e3ed4af"} Jan 23 09:44:01 crc kubenswrapper[4982]: I0123 09:44:01.930957 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-5r942" podStartSLOduration=2.243124406 podStartE2EDuration="7.930937031s" podCreationTimestamp="2026-01-23 09:43:54 +0000 UTC" firstStartedPulling="2026-01-23 09:43:55.787762849 +0000 UTC m=+6510.268126235" lastFinishedPulling="2026-01-23 09:44:01.475575474 +0000 UTC m=+6515.955938860" observedRunningTime="2026-01-23 09:44:01.918382786 +0000 UTC m=+6516.398746192" watchObservedRunningTime="2026-01-23 09:44:01.930937031 +0000 UTC m=+6516.411300417" Jan 23 09:44:03 crc kubenswrapper[4982]: I0123 09:44:03.931063 4982 
generic.go:334] "Generic (PLEG): container finished" podID="75f4ee61-f493-4560-91a7-229558683ade" containerID="e80fd41ea97677ad5986663394776db9499cdfc6c98155f55f5e5fd42e3ed4af" exitCode=0 Jan 23 09:44:03 crc kubenswrapper[4982]: I0123 09:44:03.931141 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5r942" event={"ID":"75f4ee61-f493-4560-91a7-229558683ade","Type":"ContainerDied","Data":"e80fd41ea97677ad5986663394776db9499cdfc6c98155f55f5e5fd42e3ed4af"} Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.341976 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-5r942" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.441200 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66hzz\" (UniqueName: \"kubernetes.io/projected/75f4ee61-f493-4560-91a7-229558683ade-kube-api-access-66hzz\") pod \"75f4ee61-f493-4560-91a7-229558683ade\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.441304 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-combined-ca-bundle\") pod \"75f4ee61-f493-4560-91a7-229558683ade\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.441435 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-config-data\") pod \"75f4ee61-f493-4560-91a7-229558683ade\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.441608 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-scripts\") pod \"75f4ee61-f493-4560-91a7-229558683ade\" (UID: \"75f4ee61-f493-4560-91a7-229558683ade\") " Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.447096 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f4ee61-f493-4560-91a7-229558683ade-kube-api-access-66hzz" (OuterVolumeSpecName: "kube-api-access-66hzz") pod "75f4ee61-f493-4560-91a7-229558683ade" (UID: "75f4ee61-f493-4560-91a7-229558683ade"). InnerVolumeSpecName "kube-api-access-66hzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.459893 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-scripts" (OuterVolumeSpecName: "scripts") pod "75f4ee61-f493-4560-91a7-229558683ade" (UID: "75f4ee61-f493-4560-91a7-229558683ade"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.477036 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-config-data" (OuterVolumeSpecName: "config-data") pod "75f4ee61-f493-4560-91a7-229558683ade" (UID: "75f4ee61-f493-4560-91a7-229558683ade"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.479515 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75f4ee61-f493-4560-91a7-229558683ade" (UID: "75f4ee61-f493-4560-91a7-229558683ade"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.546231 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.546266 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66hzz\" (UniqueName: \"kubernetes.io/projected/75f4ee61-f493-4560-91a7-229558683ade-kube-api-access-66hzz\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.546276 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.546284 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f4ee61-f493-4560-91a7-229558683ade-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.950025 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-5r942" event={"ID":"75f4ee61-f493-4560-91a7-229558683ade","Type":"ContainerDied","Data":"f2a221977a5b2882322866d16c33e70e744fe9e5fe70be94c3995bdbc686322d"} Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.950063 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a221977a5b2882322866d16c33e70e744fe9e5fe70be94c3995bdbc686322d" Jan 23 09:44:05 crc kubenswrapper[4982]: I0123 09:44:05.950075 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-5r942" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.280181 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.310354 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.326983 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.492030 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:09 crc kubenswrapper[4982]: E0123 09:44:09.492700 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f4ee61-f493-4560-91a7-229558683ade" containerName="aodh-db-sync" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.492725 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f4ee61-f493-4560-91a7-229558683ade" containerName="aodh-db-sync" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.493073 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f4ee61-f493-4560-91a7-229558683ade" containerName="aodh-db-sync" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.515789 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.521090 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.521132 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.521462 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ljj56" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.527957 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.602033 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-combined-ca-bundle\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.603001 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-config-data\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.603055 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-scripts\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.603168 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnl6m\" (UniqueName: \"kubernetes.io/projected/433688d5-2a4f-4794-b49d-e653b6093e87-kube-api-access-pnl6m\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " 
pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.706723 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnl6m\" (UniqueName: \"kubernetes.io/projected/433688d5-2a4f-4794-b49d-e653b6093e87-kube-api-access-pnl6m\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.706896 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-combined-ca-bundle\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.707008 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-config-data\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.707029 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-scripts\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.716295 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-scripts\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.716965 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-config-data\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.733131 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnl6m\" (UniqueName: \"kubernetes.io/projected/433688d5-2a4f-4794-b49d-e653b6093e87-kube-api-access-pnl6m\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.744197 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-combined-ca-bundle\") pod \"aodh-0\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " pod="openstack/aodh-0" Jan 23 09:44:09 crc kubenswrapper[4982]: I0123 09:44:09.866532 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 23 09:44:10 crc kubenswrapper[4982]: I0123 09:44:10.019073 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 09:44:10 crc kubenswrapper[4982]: I0123 09:44:10.515918 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:11 crc kubenswrapper[4982]: I0123 09:44:11.033213 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerStarted","Data":"d9b9b060728118d4d90f1e0fd773c683552b5a52ebcb4aaf5696804f5209ce83"} Jan 23 09:44:11 crc kubenswrapper[4982]: I0123 09:44:11.827516 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:44:11 crc kubenswrapper[4982]: E0123 09:44:11.828086 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:44:12 crc kubenswrapper[4982]: I0123 09:44:12.047455 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerStarted","Data":"8be3801089a0054ee5584da7dde8ac900b4bcc44c60ce7b3f2b94608fd6f1ca6"} Jan 23 09:44:12 crc kubenswrapper[4982]: I0123 09:44:12.271788 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:12 crc kubenswrapper[4982]: I0123 09:44:12.272074 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-central-agent" containerID="cri-o://b7f4a9fb18d808595ee0c6ce5168471ae3c7d3ee93dce2fac68bd5c1304b452c" gracePeriod=30 Jan 23 09:44:12 crc kubenswrapper[4982]: I0123 09:44:12.272512 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="proxy-httpd" containerID="cri-o://d99d486daf78654482d841633af77cf1bcfd36517f6d2e08e1da3d69c5581d4f" gracePeriod=30 Jan 23 09:44:12 crc kubenswrapper[4982]: I0123 09:44:12.272596 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-notification-agent" containerID="cri-o://f79f5572265681bd0ab28b09f6e1d054441ae64d68ea24d4fcc7e1c40bf989ef" gracePeriod=30 Jan 23 09:44:12 crc kubenswrapper[4982]: I0123 09:44:12.272736 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="sg-core" containerID="cri-o://da75e0219eb2f880f6a91082691fb9bdd4022e361ee506d33dfaa422dcd5c5e0" gracePeriod=30 Jan 23 09:44:13 crc kubenswrapper[4982]: I0123 09:44:13.030179 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:13 crc kubenswrapper[4982]: I0123 09:44:13.064980 4982 generic.go:334] "Generic (PLEG): container finished" podID="114a886f-cd3c-460e-93a2-9f459ea999d7" 
containerID="da75e0219eb2f880f6a91082691fb9bdd4022e361ee506d33dfaa422dcd5c5e0" exitCode=2 Jan 23 09:44:13 crc kubenswrapper[4982]: I0123 09:44:13.065029 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerDied","Data":"da75e0219eb2f880f6a91082691fb9bdd4022e361ee506d33dfaa422dcd5c5e0"} Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.079164 4982 generic.go:334] "Generic (PLEG): container finished" podID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerID="d99d486daf78654482d841633af77cf1bcfd36517f6d2e08e1da3d69c5581d4f" exitCode=0 Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.079946 4982 generic.go:334] "Generic (PLEG): container finished" podID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerID="f79f5572265681bd0ab28b09f6e1d054441ae64d68ea24d4fcc7e1c40bf989ef" exitCode=0 Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.080010 4982 generic.go:334] "Generic (PLEG): container finished" podID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerID="b7f4a9fb18d808595ee0c6ce5168471ae3c7d3ee93dce2fac68bd5c1304b452c" exitCode=0 Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.079231 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerDied","Data":"d99d486daf78654482d841633af77cf1bcfd36517f6d2e08e1da3d69c5581d4f"} Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.080174 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerDied","Data":"f79f5572265681bd0ab28b09f6e1d054441ae64d68ea24d4fcc7e1c40bf989ef"} Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.080235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerDied","Data":"b7f4a9fb18d808595ee0c6ce5168471ae3c7d3ee93dce2fac68bd5c1304b452c"} Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.895455 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.980030 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-log-httpd\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.980391 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-run-httpd\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.980481 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-sg-core-conf-yaml\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.981101 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.981989 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.984685 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:14 crc kubenswrapper[4982]: I0123 09:44:14.984753 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/114a886f-cd3c-460e-93a2-9f459ea999d7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.103874 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.106417 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7c9\" (UniqueName: \"kubernetes.io/projected/114a886f-cd3c-460e-93a2-9f459ea999d7-kube-api-access-dp7c9\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.106563 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-config-data\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.106660 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-combined-ca-bundle\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.106692 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-scripts\") pod \"114a886f-cd3c-460e-93a2-9f459ea999d7\" (UID: \"114a886f-cd3c-460e-93a2-9f459ea999d7\") " Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.108543 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.114168 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/114a886f-cd3c-460e-93a2-9f459ea999d7-kube-api-access-dp7c9" (OuterVolumeSpecName: "kube-api-access-dp7c9") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "kube-api-access-dp7c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.131734 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-scripts" (OuterVolumeSpecName: "scripts") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.156891 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"114a886f-cd3c-460e-93a2-9f459ea999d7","Type":"ContainerDied","Data":"26e320bc4bfa5f7786a4a5570dc699bf3e64152a98c4f4d4e6dce07fb11a76a5"} Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.156944 4982 scope.go:117] "RemoveContainer" containerID="d99d486daf78654482d841633af77cf1bcfd36517f6d2e08e1da3d69c5581d4f" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.157092 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.211459 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7c9\" (UniqueName: \"kubernetes.io/projected/114a886f-cd3c-460e-93a2-9f459ea999d7-kube-api-access-dp7c9\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.211711 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.228980 4982 scope.go:117] "RemoveContainer" containerID="da75e0219eb2f880f6a91082691fb9bdd4022e361ee506d33dfaa422dcd5c5e0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.243878 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.256858 4982 scope.go:117] "RemoveContainer" containerID="f79f5572265681bd0ab28b09f6e1d054441ae64d68ea24d4fcc7e1c40bf989ef" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.265395 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-config-data" (OuterVolumeSpecName: "config-data") pod "114a886f-cd3c-460e-93a2-9f459ea999d7" (UID: "114a886f-cd3c-460e-93a2-9f459ea999d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.280305 4982 scope.go:117] "RemoveContainer" containerID="b7f4a9fb18d808595ee0c6ce5168471ae3c7d3ee93dce2fac68bd5c1304b452c" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.313882 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.313930 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/114a886f-cd3c-460e-93a2-9f459ea999d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.504145 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.526516 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.553829 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:15 crc kubenswrapper[4982]: E0123 09:44:15.554460 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-central-agent" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.554479 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-central-agent" Jan 23 09:44:15 crc kubenswrapper[4982]: E0123 09:44:15.554498 4982 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="sg-core" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.554506 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="sg-core" Jan 23 09:44:15 crc kubenswrapper[4982]: E0123 09:44:15.554525 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="proxy-httpd" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.554532 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="proxy-httpd" Jan 23 09:44:15 crc kubenswrapper[4982]: E0123 09:44:15.554546 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-notification-agent" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.554554 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-notification-agent" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.557984 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="sg-core" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.558325 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-central-agent" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.558344 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="proxy-httpd" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.558387 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" containerName="ceilometer-notification-agent" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.578068 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.580102 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.580914 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.590471 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620250 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-scripts\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620385 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7xg\" (UniqueName: \"kubernetes.io/projected/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-kube-api-access-nt7xg\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620407 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-log-httpd\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620454 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-run-httpd\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620515 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620531 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-config-data\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.620555 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722190 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-config-data\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722252 
4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722302 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-scripts\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722413 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7xg\" (UniqueName: \"kubernetes.io/projected/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-kube-api-access-nt7xg\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722431 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-log-httpd\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722468 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-run-httpd\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.722518 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.723225 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-log-httpd\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.725927 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-run-httpd\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.727942 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-config-data\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.730006 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-scripts\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.730793 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.742745 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.743460 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7xg\" (UniqueName: \"kubernetes.io/projected/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-kube-api-access-nt7xg\") pod \"ceilometer-0\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") " pod="openstack/ceilometer-0" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.850814 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="114a886f-cd3c-460e-93a2-9f459ea999d7" path="/var/lib/kubelet/pods/114a886f-cd3c-460e-93a2-9f459ea999d7/volumes" Jan 23 09:44:15 crc kubenswrapper[4982]: I0123 09:44:15.910353 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:16 crc kubenswrapper[4982]: I0123 09:44:16.186570 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerStarted","Data":"da09384ad1ca1ec3bfbaa53fe1befaef1b13d77605c45ee9b609b6f47154c06a"} Jan 23 09:44:16 crc kubenswrapper[4982]: W0123 09:44:16.587608 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda35a2f0_7e5a_4428_aab4_09ca3b70a09e.slice/crio-222112b5fe271b45cbea76bfbdc497488583d10c4c5c25c541e35a6a6ee08a4e WatchSource:0}: Error finding container 222112b5fe271b45cbea76bfbdc497488583d10c4c5c25c541e35a6a6ee08a4e: Status 404 returned error can't find the container with id 222112b5fe271b45cbea76bfbdc497488583d10c4c5c25c541e35a6a6ee08a4e Jan 23 09:44:16 crc kubenswrapper[4982]: I0123 09:44:16.592354 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:16 crc kubenswrapper[4982]: I0123 09:44:16.955247 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:17 crc kubenswrapper[4982]: I0123 09:44:17.200964 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerStarted","Data":"222112b5fe271b45cbea76bfbdc497488583d10c4c5c25c541e35a6a6ee08a4e"} Jan 23 09:44:18 crc kubenswrapper[4982]: I0123 09:44:18.215879 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerStarted","Data":"05e51b73f4ede1dffa500218a4d8c855dc9881b7b9c3d50413d6f2c1caef11d1"} Jan 23 09:44:18 crc kubenswrapper[4982]: I0123 09:44:18.218172 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerStarted","Data":"2e1dacf3f695d2c0880a809b4298a48d44f13743baefac5b382eca8561a94993"} Jan 23 09:44:19 crc kubenswrapper[4982]: I0123 09:44:19.239385 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerStarted","Data":"fc8aa09390218c289aaf66e79f5af582aad7468249a98b625bc4c757e62bd320"} Jan 23 09:44:19 crc kubenswrapper[4982]: I0123 09:44:19.500661 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:44:19 crc kubenswrapper[4982]: I0123 09:44:19.501191 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="18a97dbf-5734-4e1a-a8d6-685b06329521" containerName="kube-state-metrics" containerID="cri-o://259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9" gracePeriod=30 Jan 23 09:44:19 crc kubenswrapper[4982]: E0123 09:44:19.641364 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18a97dbf_5734_4e1a_a8d6_685b06329521.slice/crio-259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.179380 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.265640 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgs7t\" (UniqueName: \"kubernetes.io/projected/18a97dbf-5734-4e1a-a8d6-685b06329521-kube-api-access-mgs7t\") pod \"18a97dbf-5734-4e1a-a8d6-685b06329521\" (UID: \"18a97dbf-5734-4e1a-a8d6-685b06329521\") " Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.279003 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a97dbf-5734-4e1a-a8d6-685b06329521-kube-api-access-mgs7t" (OuterVolumeSpecName: "kube-api-access-mgs7t") pod "18a97dbf-5734-4e1a-a8d6-685b06329521" (UID: "18a97dbf-5734-4e1a-a8d6-685b06329521"). InnerVolumeSpecName "kube-api-access-mgs7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.295077 4982 generic.go:334] "Generic (PLEG): container finished" podID="18a97dbf-5734-4e1a-a8d6-685b06329521" containerID="259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9" exitCode=2 Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.295168 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"18a97dbf-5734-4e1a-a8d6-685b06329521","Type":"ContainerDied","Data":"259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9"} Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.295194 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"18a97dbf-5734-4e1a-a8d6-685b06329521","Type":"ContainerDied","Data":"0506b5bd940ab33e43a7425716e4f0e441cc28d2035083da2b6cf9638ddcdbb0"} Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.295209 4982 scope.go:117] "RemoveContainer" containerID="259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.295357 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.322773 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerStarted","Data":"8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639"} Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.322864 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-api" containerID="cri-o://8be3801089a0054ee5584da7dde8ac900b4bcc44c60ce7b3f2b94608fd6f1ca6" gracePeriod=30 Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.322887 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-listener" containerID="cri-o://8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639" gracePeriod=30 Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.322943 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-notifier" containerID="cri-o://05e51b73f4ede1dffa500218a4d8c855dc9881b7b9c3d50413d6f2c1caef11d1" gracePeriod=30 Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.322980 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-evaluator" containerID="cri-o://da09384ad1ca1ec3bfbaa53fe1befaef1b13d77605c45ee9b609b6f47154c06a" gracePeriod=30 Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.355388 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.773281014 podStartE2EDuration="11.355371605s" podCreationTimestamp="2026-01-23 09:44:09 +0000 UTC" firstStartedPulling="2026-01-23 09:44:10.487337716 +0000 UTC m=+6524.967701102" lastFinishedPulling="2026-01-23 09:44:19.069428307 +0000 UTC m=+6533.549791693" observedRunningTime="2026-01-23 09:44:20.348985809 +0000 UTC m=+6534.829349195" watchObservedRunningTime="2026-01-23 09:44:20.355371605 +0000 UTC m=+6534.835734991" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.359453 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerStarted","Data":"a31c5ff4cf654df6255a108738385f92ba5f979201712e9f1fea8ac9d1979149"} Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.369983 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgs7t\" (UniqueName: \"kubernetes.io/projected/18a97dbf-5734-4e1a-a8d6-685b06329521-kube-api-access-mgs7t\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.428484 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.452543 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.478460 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:44:20 crc kubenswrapper[4982]: E0123 09:44:20.479231 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a97dbf-5734-4e1a-a8d6-685b06329521" 
containerName="kube-state-metrics" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.479258 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a97dbf-5734-4e1a-a8d6-685b06329521" containerName="kube-state-metrics" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.479559 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a97dbf-5734-4e1a-a8d6-685b06329521" containerName="kube-state-metrics" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.480504 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.482881 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.483226 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.528896 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.540016 4982 scope.go:117] "RemoveContainer" containerID="259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9" Jan 23 09:44:20 crc kubenswrapper[4982]: E0123 09:44:20.540408 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9\": container with ID starting with 259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9 not found: ID does not exist" containerID="259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.540451 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9"} err="failed to get container status \"259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9\": rpc error: code = NotFound desc = could not find container \"259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9\": container with ID starting with 259de8863401b4e37b210d759c3b590eef917afab9786d1ceabae0c4466b0fc9 not found: ID does not exist" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.585077 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.585224 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.585255 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqwp\" (UniqueName: \"kubernetes.io/projected/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-api-access-glqwp\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " 
pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.585349 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.687649 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.687703 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqwp\" (UniqueName: \"kubernetes.io/projected/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-api-access-glqwp\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.687786 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.687876 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.692488 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.692516 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.713016 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a4acf4-9689-4375-b94f-fad12c76e1d8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.715428 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqwp\" (UniqueName: \"kubernetes.io/projected/90a4acf4-9689-4375-b94f-fad12c76e1d8-kube-api-access-glqwp\") pod \"kube-state-metrics-0\" (UID: \"90a4acf4-9689-4375-b94f-fad12c76e1d8\") " pod="openstack/kube-state-metrics-0" Jan 
23 09:44:20 crc kubenswrapper[4982]: I0123 09:44:20.832865 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 09:44:21 crc kubenswrapper[4982]: I0123 09:44:21.416467 4982 generic.go:334] "Generic (PLEG): container finished" podID="433688d5-2a4f-4794-b49d-e653b6093e87" containerID="da09384ad1ca1ec3bfbaa53fe1befaef1b13d77605c45ee9b609b6f47154c06a" exitCode=0 Jan 23 09:44:21 crc kubenswrapper[4982]: I0123 09:44:21.416987 4982 generic.go:334] "Generic (PLEG): container finished" podID="433688d5-2a4f-4794-b49d-e653b6093e87" containerID="8be3801089a0054ee5584da7dde8ac900b4bcc44c60ce7b3f2b94608fd6f1ca6" exitCode=0 Jan 23 09:44:21 crc kubenswrapper[4982]: I0123 09:44:21.416667 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerDied","Data":"da09384ad1ca1ec3bfbaa53fe1befaef1b13d77605c45ee9b609b6f47154c06a"} Jan 23 09:44:21 crc kubenswrapper[4982]: I0123 09:44:21.417026 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerDied","Data":"8be3801089a0054ee5584da7dde8ac900b4bcc44c60ce7b3f2b94608fd6f1ca6"} Jan 23 09:44:21 crc kubenswrapper[4982]: I0123 09:44:21.594799 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 09:44:21 crc kubenswrapper[4982]: I0123 09:44:21.843984 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a97dbf-5734-4e1a-a8d6-685b06329521" path="/var/lib/kubelet/pods/18a97dbf-5734-4e1a-a8d6-685b06329521/volumes" Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.445531 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerStarted","Data":"cd643317b33a3e02b6394fe094ad2afaa2c2ca61dae0816315a5501f7638073c"} Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.446128 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.445779 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-notification-agent" containerID="cri-o://fc8aa09390218c289aaf66e79f5af582aad7468249a98b625bc4c757e62bd320" gracePeriod=30 Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.445589 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-central-agent" containerID="cri-o://2e1dacf3f695d2c0880a809b4298a48d44f13743baefac5b382eca8561a94993" gracePeriod=30 Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.445812 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="proxy-httpd" containerID="cri-o://cd643317b33a3e02b6394fe094ad2afaa2c2ca61dae0816315a5501f7638073c" gracePeriod=30 Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.445790 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="sg-core" containerID="cri-o://a31c5ff4cf654df6255a108738385f92ba5f979201712e9f1fea8ac9d1979149" gracePeriod=30 Jan 23 09:44:22 crc 
kubenswrapper[4982]: I0123 09:44:22.450109 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"90a4acf4-9689-4375-b94f-fad12c76e1d8","Type":"ContainerStarted","Data":"51745e3b5e4f15e6fb495f87bad279ecb724e67b35fef0c19e92492861c0ca49"} Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.450153 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"90a4acf4-9689-4375-b94f-fad12c76e1d8","Type":"ContainerStarted","Data":"20803b281966f409955e013dba072c2e48eba661b1d059ed8277d7671bdd0874"} Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.450535 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.490298 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.697753422 podStartE2EDuration="7.490277198s" podCreationTimestamp="2026-01-23 09:44:15 +0000 UTC" firstStartedPulling="2026-01-23 09:44:16.593191091 +0000 UTC m=+6531.073554477" lastFinishedPulling="2026-01-23 09:44:21.385714867 +0000 UTC m=+6535.866078253" observedRunningTime="2026-01-23 09:44:22.476835388 +0000 UTC m=+6536.957198774" watchObservedRunningTime="2026-01-23 09:44:22.490277198 +0000 UTC m=+6536.970640584" Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.511136 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.129657834 podStartE2EDuration="2.51110737s" podCreationTimestamp="2026-01-23 09:44:20 +0000 UTC" firstStartedPulling="2026-01-23 09:44:21.599542744 +0000 UTC m=+6536.079906130" lastFinishedPulling="2026-01-23 09:44:21.98099229 +0000 UTC m=+6536.461355666" observedRunningTime="2026-01-23 09:44:22.494411931 +0000 UTC m=+6536.974775307" watchObservedRunningTime="2026-01-23 09:44:22.51110737 +0000 UTC m=+6536.991470756" Jan 23 09:44:22 crc kubenswrapper[4982]: I0123 09:44:22.828673 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:44:22 crc kubenswrapper[4982]: E0123 09:44:22.829181 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:44:23 crc kubenswrapper[4982]: I0123 09:44:23.466951 4982 generic.go:334] "Generic (PLEG): container finished" podID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerID="cd643317b33a3e02b6394fe094ad2afaa2c2ca61dae0816315a5501f7638073c" exitCode=0 Jan 23 09:44:23 crc kubenswrapper[4982]: I0123 09:44:23.466992 4982 generic.go:334] "Generic (PLEG): container finished" podID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerID="a31c5ff4cf654df6255a108738385f92ba5f979201712e9f1fea8ac9d1979149" exitCode=2 Jan 23 09:44:23 crc kubenswrapper[4982]: I0123 09:44:23.467002 4982 generic.go:334] "Generic (PLEG): container finished" podID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerID="fc8aa09390218c289aaf66e79f5af582aad7468249a98b625bc4c757e62bd320" exitCode=0 Jan 23 09:44:23 crc kubenswrapper[4982]: I0123 09:44:23.467377 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerDied","Data":"cd643317b33a3e02b6394fe094ad2afaa2c2ca61dae0816315a5501f7638073c"} Jan 23 09:44:23 crc kubenswrapper[4982]: I0123 09:44:23.467451 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerDied","Data":"a31c5ff4cf654df6255a108738385f92ba5f979201712e9f1fea8ac9d1979149"} Jan 23 09:44:23 crc kubenswrapper[4982]: I0123 09:44:23.467465 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerDied","Data":"fc8aa09390218c289aaf66e79f5af582aad7468249a98b625bc4c757e62bd320"} Jan 23 09:44:24 crc kubenswrapper[4982]: I0123 09:44:24.053198 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wgjvd"] Jan 23 09:44:24 crc kubenswrapper[4982]: I0123 09:44:24.070174 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-8zsl9"] Jan 23 09:44:24 crc kubenswrapper[4982]: I0123 09:44:24.082892 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-8zsl9"] Jan 23 09:44:24 crc kubenswrapper[4982]: I0123 09:44:24.095087 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wgjvd"] Jan 23 09:44:25 crc kubenswrapper[4982]: I0123 09:44:25.841028 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40479474-e29f-4fd9-9880-47c78e8d3081" path="/var/lib/kubelet/pods/40479474-e29f-4fd9-9880-47c78e8d3081/volumes" Jan 23 09:44:25 crc kubenswrapper[4982]: I0123 09:44:25.842318 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6be0584-affd-47f3-8606-83e94c22c855" path="/var/lib/kubelet/pods/d6be0584-affd-47f3-8606-83e94c22c855/volumes" Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.139047 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d846-account-create-update-rhmzn"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.153007 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d92b-account-create-update-lx2gt"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.164295 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ba3d-account-create-update-w79nx"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.173599 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d846-account-create-update-rhmzn"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.185338 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-b6886"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.196903 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d92b-account-create-update-lx2gt"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.207445 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ba3d-account-create-update-w79nx"] Jan 23 09:44:26 crc kubenswrapper[4982]: I0123 09:44:26.217711 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-b6886"] Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.518638 4982 generic.go:334] "Generic (PLEG): container finished" podID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerID="2e1dacf3f695d2c0880a809b4298a48d44f13743baefac5b382eca8561a94993" 
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.518638 4982 generic.go:334] "Generic (PLEG): container finished" podID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerID="2e1dacf3f695d2c0880a809b4298a48d44f13743baefac5b382eca8561a94993" exitCode=0
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.518725 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerDied","Data":"2e1dacf3f695d2c0880a809b4298a48d44f13743baefac5b382eca8561a94993"}
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.853635 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ce6f0af-af9b-44a0-8e98-188afaa9fa75" path="/var/lib/kubelet/pods/0ce6f0af-af9b-44a0-8e98-188afaa9fa75/volumes"
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.855110 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ffef2a-10fa-4d0c-9b74-41acda0a4121" path="/var/lib/kubelet/pods/47ffef2a-10fa-4d0c-9b74-41acda0a4121/volumes"
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.855953 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8075af5-7e6e-46b3-b08d-065cead6efb8" path="/var/lib/kubelet/pods/c8075af5-7e6e-46b3-b08d-065cead6efb8/volumes"
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.856515 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccac9163-42c2-4c27-b714-f4aab14e3a12" path="/var/lib/kubelet/pods/ccac9163-42c2-4c27-b714-f4aab14e3a12/volumes"
Jan 23 09:44:27 crc kubenswrapper[4982]: I0123 09:44:27.984766 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132422 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt7xg\" (UniqueName: \"kubernetes.io/projected/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-kube-api-access-nt7xg\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132504 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-scripts\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132544 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-run-httpd\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132720 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-log-httpd\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132773 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-sg-core-conf-yaml\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132818 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-combined-ca-bundle\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.132863 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-config-data\") pod \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\" (UID: \"da35a2f0-7e5a-4428-aab4-09ca3b70a09e\") "
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.133347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.133788 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.139926 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-scripts" (OuterVolumeSpecName: "scripts") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.143617 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-kube-api-access-nt7xg" (OuterVolumeSpecName: "kube-api-access-nt7xg") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "kube-api-access-nt7xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.166191 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.235877 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt7xg\" (UniqueName: \"kubernetes.io/projected/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-kube-api-access-nt7xg\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.235913 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.235924 4982 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.235937 4982 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.235949 4982 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.241746 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.279743 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-config-data" (OuterVolumeSpecName: "config-data") pod "da35a2f0-7e5a-4428-aab4-09ca3b70a09e" (UID: "da35a2f0-7e5a-4428-aab4-09ca3b70a09e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.338529 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.338561 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da35a2f0-7e5a-4428-aab4-09ca3b70a09e-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.536587 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da35a2f0-7e5a-4428-aab4-09ca3b70a09e","Type":"ContainerDied","Data":"222112b5fe271b45cbea76bfbdc497488583d10c4c5c25c541e35a6a6ee08a4e"}
Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.536928 4982 scope.go:117] "RemoveContainer" containerID="cd643317b33a3e02b6394fe094ad2afaa2c2ca61dae0816315a5501f7638073c"
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.598249 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.605896 4982 scope.go:117] "RemoveContainer" containerID="a31c5ff4cf654df6255a108738385f92ba5f979201712e9f1fea8ac9d1979149" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.611682 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.623103 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:28 crc kubenswrapper[4982]: E0123 09:44:28.623676 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="sg-core" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.623696 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="sg-core" Jan 23 09:44:28 crc kubenswrapper[4982]: E0123 09:44:28.623733 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="proxy-httpd" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.623739 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="proxy-httpd" Jan 23 09:44:28 crc kubenswrapper[4982]: E0123 09:44:28.623752 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-notification-agent" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.623767 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-notification-agent" Jan 23 09:44:28 crc kubenswrapper[4982]: E0123 09:44:28.623790 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-central-agent" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.623797 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-central-agent" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.624039 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-central-agent" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.624054 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="proxy-httpd" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.624073 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="ceilometer-notification-agent" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.624085 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" containerName="sg-core" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.626492 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.630244 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.631349 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.631748 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.633201 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.657823 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-config-data\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.657902 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.657983 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6373cfd-07f7-414a-b88f-b928c187d201-log-httpd\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.658368 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6373cfd-07f7-414a-b88f-b928c187d201-run-httpd\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.658509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-scripts\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.658563 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.658746 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7fz2\" (UniqueName: \"kubernetes.io/projected/e6373cfd-07f7-414a-b88f-b928c187d201-kube-api-access-k7fz2\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.658800 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.659441 4982 scope.go:117] "RemoveContainer" containerID="fc8aa09390218c289aaf66e79f5af582aad7468249a98b625bc4c757e62bd320" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.680425 4982 scope.go:117] "RemoveContainer" containerID="2e1dacf3f695d2c0880a809b4298a48d44f13743baefac5b382eca8561a94993" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760368 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-scripts\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760428 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760497 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7fz2\" (UniqueName: \"kubernetes.io/projected/e6373cfd-07f7-414a-b88f-b928c187d201-kube-api-access-k7fz2\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760520 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760543 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-config-data\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760568 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760599 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6373cfd-07f7-414a-b88f-b928c187d201-log-httpd\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.760760 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6373cfd-07f7-414a-b88f-b928c187d201-run-httpd\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.761317 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e6373cfd-07f7-414a-b88f-b928c187d201-log-httpd\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.762874 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e6373cfd-07f7-414a-b88f-b928c187d201-run-httpd\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.765177 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.766336 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.769414 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-scripts\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.774639 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-config-data\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.776960 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7fz2\" (UniqueName: \"kubernetes.io/projected/e6373cfd-07f7-414a-b88f-b928c187d201-kube-api-access-k7fz2\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.779054 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6373cfd-07f7-414a-b88f-b928c187d201-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e6373cfd-07f7-414a-b88f-b928c187d201\") " pod="openstack/ceilometer-0" Jan 23 09:44:28 crc kubenswrapper[4982]: I0123 09:44:28.953303 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 09:44:29 crc kubenswrapper[4982]: I0123 09:44:29.489335 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 09:44:29 crc kubenswrapper[4982]: I0123 09:44:29.549034 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"52253bf50f6c624ab8815c02c2e4a3d28b574158364649134aecc0248d5665ee"} Jan 23 09:44:29 crc kubenswrapper[4982]: I0123 09:44:29.852541 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da35a2f0-7e5a-4428-aab4-09ca3b70a09e" path="/var/lib/kubelet/pods/da35a2f0-7e5a-4428-aab4-09ca3b70a09e/volumes" Jan 23 09:44:30 crc kubenswrapper[4982]: I0123 09:44:30.560957 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"8582593d320eb1b6aefbbb745ffb294efe9145049bb2ac22b666a4f7f375008a"} Jan 23 09:44:30 crc kubenswrapper[4982]: I0123 09:44:30.841247 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 09:44:31 crc kubenswrapper[4982]: I0123 09:44:31.574500 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"b4f0237782f8735049b9fb2b6e770bbb1565cfe33e8f143cfdf7d63571b30eb3"} Jan 23 09:44:32 crc kubenswrapper[4982]: I0123 09:44:32.592503 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"472decb47fc5d21bebfad9c7f5969532cd95d532c538f27606274b8bb4c3641d"} Jan 23 09:44:34 crc kubenswrapper[4982]: I0123 09:44:34.619563 4982 generic.go:334] "Generic (PLEG): container finished" podID="433688d5-2a4f-4794-b49d-e653b6093e87" containerID="05e51b73f4ede1dffa500218a4d8c855dc9881b7b9c3d50413d6f2c1caef11d1" exitCode=0 Jan 23 09:44:34 crc kubenswrapper[4982]: I0123 09:44:34.619661 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerDied","Data":"05e51b73f4ede1dffa500218a4d8c855dc9881b7b9c3d50413d6f2c1caef11d1"} Jan 23 09:44:35 crc kubenswrapper[4982]: I0123 09:44:35.633419 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"dfd3eadd5f5dad3e7bbca5ae2436fd5d0433f3ebe2fb87c04c56eda1928eb857"} Jan 23 09:44:35 crc kubenswrapper[4982]: I0123 09:44:35.633921 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 09:44:35 crc kubenswrapper[4982]: I0123 09:44:35.666510 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.286526338 podStartE2EDuration="7.666489221s" podCreationTimestamp="2026-01-23 09:44:28 +0000 UTC" firstStartedPulling="2026-01-23 09:44:29.493019586 +0000 UTC m=+6543.973382972" lastFinishedPulling="2026-01-23 09:44:34.872982469 +0000 UTC m=+6549.353345855" observedRunningTime="2026-01-23 09:44:35.654376188 +0000 UTC m=+6550.134739574" watchObservedRunningTime="2026-01-23 09:44:35.666489221 +0000 UTC m=+6550.146852607" Jan 23 09:44:35 crc kubenswrapper[4982]: I0123 09:44:35.837312 4982 scope.go:117] "RemoveContainer" 
containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:44:35 crc kubenswrapper[4982]: E0123 09:44:35.840240 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:44:38 crc kubenswrapper[4982]: I0123 09:44:38.412711 4982 patch_prober.go:28] interesting pod/controller-manager-7677679df7-mgb8d container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:44:38 crc kubenswrapper[4982]: I0123 09:44:38.413057 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" podUID="cca06715-e18b-4241-a1c6-9bff76f266fe" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:44:38 crc kubenswrapper[4982]: I0123 09:44:38.455916 4982 patch_prober.go:28] interesting pod/controller-manager-7677679df7-mgb8d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:44:38 crc kubenswrapper[4982]: I0123 09:44:38.456004 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7677679df7-mgb8d" podUID="cca06715-e18b-4241-a1c6-9bff76f266fe" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:44:41 crc kubenswrapper[4982]: I0123 09:44:41.042468 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s4rj"] Jan 23 09:44:41 crc kubenswrapper[4982]: I0123 09:44:41.051806 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4s4rj"] Jan 23 09:44:41 crc kubenswrapper[4982]: I0123 09:44:41.845297 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb634be2-4ee1-47fb-aa55-8f6416c1aba8" path="/var/lib/kubelet/pods/bb634be2-4ee1-47fb-aa55-8f6416c1aba8/volumes" Jan 23 09:44:46 crc kubenswrapper[4982]: I0123 09:44:46.976112 4982 scope.go:117] "RemoveContainer" containerID="9788b1a222a7ceed2550fdc3c4f23cc671d48b0acc4e9b4dde463ced562d9aed" Jan 23 09:44:47 crc kubenswrapper[4982]: I0123 09:44:47.011550 4982 scope.go:117] "RemoveContainer" containerID="44e18a8c87e38e549763c2196f5235726f0735f9ae00c0994fb85e31cfed3354" Jan 23 09:44:47 crc kubenswrapper[4982]: I0123 09:44:47.085580 4982 scope.go:117] "RemoveContainer" containerID="880ce0a4c73b6fe9e982d827e2d02ff676011187834d8eb261179a12b14b3763" Jan 23 09:44:47 crc kubenswrapper[4982]: I0123 09:44:47.148077 4982 scope.go:117] "RemoveContainer" containerID="087df3345fdcfb198ce149d139b843a6767571de54a2a55f68c098979797d455" Jan 23 
09:44:47 crc kubenswrapper[4982]: I0123 09:44:47.187542 4982 scope.go:117] "RemoveContainer" containerID="c8c20b1cbc6e8c0f850a40fd3b81f0ca162d7f8f31b5f93f36ad9fb1228130f1" Jan 23 09:44:47 crc kubenswrapper[4982]: I0123 09:44:47.254701 4982 scope.go:117] "RemoveContainer" containerID="3d70546f66f70b7063a47bbdb9c93860e2696f9f3f6ec45e1545982617cbaa53" Jan 23 09:44:47 crc kubenswrapper[4982]: I0123 09:44:47.299792 4982 scope.go:117] "RemoveContainer" containerID="70191a2199a49a9b59309d2e343d3776daaaaf7fe9436ae3a615f27818036ffb" Jan 23 09:44:50 crc kubenswrapper[4982]: E0123 09:44:50.622964 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod433688d5_2a4f_4794_b49d_e653b6093e87.slice/crio-8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod433688d5_2a4f_4794_b49d_e653b6093e87.slice/crio-conmon-8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:44:50 crc kubenswrapper[4982]: I0123 09:44:50.828542 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:44:50 crc kubenswrapper[4982]: E0123 09:44:50.829229 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:44:50 crc kubenswrapper[4982]: I0123 09:44:50.836872 4982 generic.go:334] "Generic (PLEG): container finished" podID="433688d5-2a4f-4794-b49d-e653b6093e87" containerID="8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639" exitCode=137 Jan 23 09:44:50 crc kubenswrapper[4982]: I0123 09:44:50.836934 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerDied","Data":"8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639"} Jan 23 09:44:50 crc kubenswrapper[4982]: I0123 09:44:50.836960 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"433688d5-2a4f-4794-b49d-e653b6093e87","Type":"ContainerDied","Data":"d9b9b060728118d4d90f1e0fd773c683552b5a52ebcb4aaf5696804f5209ce83"} Jan 23 09:44:50 crc kubenswrapper[4982]: I0123 09:44:50.836969 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b9b060728118d4d90f1e0fd773c683552b5a52ebcb4aaf5696804f5209ce83" Jan 23 09:44:50 crc kubenswrapper[4982]: I0123 09:44:50.925986 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.127853 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-scripts\") pod \"433688d5-2a4f-4794-b49d-e653b6093e87\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.127973 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-combined-ca-bundle\") pod \"433688d5-2a4f-4794-b49d-e653b6093e87\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.127999 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnl6m\" (UniqueName: \"kubernetes.io/projected/433688d5-2a4f-4794-b49d-e653b6093e87-kube-api-access-pnl6m\") pod \"433688d5-2a4f-4794-b49d-e653b6093e87\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.128152 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-config-data\") pod \"433688d5-2a4f-4794-b49d-e653b6093e87\" (UID: \"433688d5-2a4f-4794-b49d-e653b6093e87\") " Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.133809 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433688d5-2a4f-4794-b49d-e653b6093e87-kube-api-access-pnl6m" (OuterVolumeSpecName: "kube-api-access-pnl6m") pod "433688d5-2a4f-4794-b49d-e653b6093e87" (UID: "433688d5-2a4f-4794-b49d-e653b6093e87"). InnerVolumeSpecName "kube-api-access-pnl6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.133829 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-scripts" (OuterVolumeSpecName: "scripts") pod "433688d5-2a4f-4794-b49d-e653b6093e87" (UID: "433688d5-2a4f-4794-b49d-e653b6093e87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.230663 4982 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.230705 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnl6m\" (UniqueName: \"kubernetes.io/projected/433688d5-2a4f-4794-b49d-e653b6093e87-kube-api-access-pnl6m\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.252808 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "433688d5-2a4f-4794-b49d-e653b6093e87" (UID: "433688d5-2a4f-4794-b49d-e653b6093e87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.266853 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-config-data" (OuterVolumeSpecName: "config-data") pod "433688d5-2a4f-4794-b49d-e653b6093e87" (UID: "433688d5-2a4f-4794-b49d-e653b6093e87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.334329 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.334370 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433688d5-2a4f-4794-b49d-e653b6093e87-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.850607 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.889741 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.915773 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.929006 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:51 crc kubenswrapper[4982]: E0123 09:44:51.929598 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-evaluator" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.929639 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-evaluator" Jan 23 09:44:51 crc kubenswrapper[4982]: E0123 09:44:51.929672 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-api" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.929680 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-api" Jan 23 09:44:51 crc kubenswrapper[4982]: E0123 09:44:51.929706 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-listener" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.929729 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-listener" Jan 23 09:44:51 crc kubenswrapper[4982]: E0123 09:44:51.929754 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-notifier" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.929762 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-notifier" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.930041 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-notifier" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.930064 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" 
containerName="aodh-evaluator" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.930080 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-api" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.930105 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" containerName="aodh-listener" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.933937 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.937068 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.939581 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.939835 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.940946 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.941155 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-ljj56" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.941427 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.948131 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-config-data\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.948202 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-combined-ca-bundle\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.948303 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-public-tls-certs\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.948416 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-scripts\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.948441 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-internal-tls-certs\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:51 crc kubenswrapper[4982]: I0123 09:44:51.948477 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b8p5t\" (UniqueName: \"kubernetes.io/projected/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-kube-api-access-b8p5t\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.053893 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-config-data\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.053966 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-combined-ca-bundle\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.053989 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-public-tls-certs\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.054053 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-internal-tls-certs\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.054072 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-scripts\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.054103 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8p5t\" (UniqueName: \"kubernetes.io/projected/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-kube-api-access-b8p5t\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.059260 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-internal-tls-certs\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.060077 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-scripts\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.060203 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-combined-ca-bundle\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.061130 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-public-tls-certs\") pod \"aodh-0\" 
(UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.061230 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-config-data\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.077789 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8p5t\" (UniqueName: \"kubernetes.io/projected/efd75096-1aa2-4c86-9ea8-3d6c1bca5f90-kube-api-access-b8p5t\") pod \"aodh-0\" (UID: \"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90\") " pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.257441 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.748025 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 23 09:44:52 crc kubenswrapper[4982]: I0123 09:44:52.904786 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90","Type":"ContainerStarted","Data":"c83ad9cf2e8b8e1e578974bbdff34c66e723486f6c71285738e5538bc20e4607"} Jan 23 09:44:53 crc kubenswrapper[4982]: I0123 09:44:53.841122 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433688d5-2a4f-4794-b49d-e653b6093e87" path="/var/lib/kubelet/pods/433688d5-2a4f-4794-b49d-e653b6093e87/volumes" Jan 23 09:44:53 crc kubenswrapper[4982]: I0123 09:44:53.919033 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90","Type":"ContainerStarted","Data":"a73520fae57e6b2507abdfc1d9dd9c90107b39c05006b2b0e7308b9ac7032d61"} Jan 23 09:44:54 crc kubenswrapper[4982]: I0123 09:44:54.936348 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90","Type":"ContainerStarted","Data":"61865af2ad3ba128767fb36d431d4f056843536b50c00861d050b2bc0667e542"} Jan 23 09:44:54 crc kubenswrapper[4982]: I0123 09:44:54.936897 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90","Type":"ContainerStarted","Data":"dfae9028a74417f20ecc0caa758d1bbb21fcc525961837544c6ccaabc4f167f6"} Jan 23 09:44:55 crc kubenswrapper[4982]: I0123 09:44:55.948710 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"efd75096-1aa2-4c86-9ea8-3d6c1bca5f90","Type":"ContainerStarted","Data":"e67a26643e56aaaa77081c53b0fa5f6c7c92453ac2d2e2bbccf4deab75f161a9"} Jan 23 09:44:55 crc kubenswrapper[4982]: I0123 09:44:55.975331 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.333159396 podStartE2EDuration="4.975312383s" podCreationTimestamp="2026-01-23 09:44:51 +0000 UTC" firstStartedPulling="2026-01-23 09:44:52.800788483 +0000 UTC m=+6567.281151869" lastFinishedPulling="2026-01-23 09:44:55.44294147 +0000 UTC m=+6569.923304856" observedRunningTime="2026-01-23 09:44:55.97264086 +0000 UTC m=+6570.453004256" watchObservedRunningTime="2026-01-23 09:44:55.975312383 +0000 UTC m=+6570.455675779" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.683945 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bfff5f6f9-c4g84"] Jan 23 09:44:57 crc 
kubenswrapper[4982]: I0123 09:44:57.730050 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.733311 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bfff5f6f9-c4g84"] Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.784475 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.815485 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-openstack-cell1\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.815543 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-sb\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.815569 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-nb\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.816509 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-config\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.816550 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-dns-svc\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.816596 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvf6\" (UniqueName: \"kubernetes.io/projected/888c696f-7c40-49a0-972f-f992f0a950e0-kube-api-access-ttvf6\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.918199 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttvf6\" (UniqueName: \"kubernetes.io/projected/888c696f-7c40-49a0-972f-f992f0a950e0-kube-api-access-ttvf6\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.918366 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-openstack-cell1\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.918406 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-sb\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.918440 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-nb\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.918599 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-config\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.918796 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-dns-svc\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.919850 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-dns-svc\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.920509 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-openstack-cell1\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.920897 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-nb\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.921346 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-config\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.921511 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-sb\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " 
pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:57 crc kubenswrapper[4982]: I0123 09:44:57.964613 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttvf6\" (UniqueName: \"kubernetes.io/projected/888c696f-7c40-49a0-972f-f992f0a950e0-kube-api-access-ttvf6\") pod \"dnsmasq-dns-bfff5f6f9-c4g84\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:58 crc kubenswrapper[4982]: I0123 09:44:58.129216 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:44:58 crc kubenswrapper[4982]: I0123 09:44:58.635228 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bfff5f6f9-c4g84"] Jan 23 09:44:58 crc kubenswrapper[4982]: W0123 09:44:58.668022 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod888c696f_7c40_49a0_972f_f992f0a950e0.slice/crio-37bfbbd5c00432b5d4783633329a3d79e6b2951b2e1b3f5261b35e7dd52cae1a WatchSource:0}: Error finding container 37bfbbd5c00432b5d4783633329a3d79e6b2951b2e1b3f5261b35e7dd52cae1a: Status 404 returned error can't find the container with id 37bfbbd5c00432b5d4783633329a3d79e6b2951b2e1b3f5261b35e7dd52cae1a Jan 23 09:44:58 crc kubenswrapper[4982]: I0123 09:44:58.968173 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 09:44:58 crc kubenswrapper[4982]: I0123 09:44:58.983678 4982 generic.go:334] "Generic (PLEG): container finished" podID="888c696f-7c40-49a0-972f-f992f0a950e0" containerID="b5e2f289b76433107bdbee8d586e2a026ad2ad3999c62ed11958d2c9d97c44fd" exitCode=0 Jan 23 09:44:58 crc kubenswrapper[4982]: I0123 09:44:58.983716 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" event={"ID":"888c696f-7c40-49a0-972f-f992f0a950e0","Type":"ContainerDied","Data":"b5e2f289b76433107bdbee8d586e2a026ad2ad3999c62ed11958d2c9d97c44fd"} Jan 23 09:44:58 crc kubenswrapper[4982]: I0123 09:44:58.983740 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" event={"ID":"888c696f-7c40-49a0-972f-f992f0a950e0","Type":"ContainerStarted","Data":"37bfbbd5c00432b5d4783633329a3d79e6b2951b2e1b3f5261b35e7dd52cae1a"} Jan 23 09:44:59 crc kubenswrapper[4982]: I0123 09:44:59.996448 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" event={"ID":"888c696f-7c40-49a0-972f-f992f0a950e0","Type":"ContainerStarted","Data":"0cd7b6af2759e2c91e73f09c11154be43edf015f4a8ef882e08f01dc975686ef"} Jan 23 09:44:59 crc kubenswrapper[4982]: I0123 09:44:59.996787 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.021967 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" podStartSLOduration=3.021932785 podStartE2EDuration="3.021932785s" podCreationTimestamp="2026-01-23 09:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:45:00.019759486 +0000 UTC m=+6574.500122872" watchObservedRunningTime="2026-01-23 09:45:00.021932785 +0000 UTC m=+6574.502296171" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.044593 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-db-sync-q42jr"] Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.058113 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-q42jr"] Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.143723 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc"] Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.145738 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.148779 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.149092 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.157959 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc"] Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.175581 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eb08e1a-eb1b-415d-9cd4-df2f75713485-secret-volume\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.175988 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb08e1a-eb1b-415d-9cd4-df2f75713485-config-volume\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.176431 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlr6\" (UniqueName: \"kubernetes.io/projected/1eb08e1a-eb1b-415d-9cd4-df2f75713485-kube-api-access-gdlr6\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.277741 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eb08e1a-eb1b-415d-9cd4-df2f75713485-secret-volume\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.277842 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb08e1a-eb1b-415d-9cd4-df2f75713485-config-volume\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.278024 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gdlr6\" (UniqueName: \"kubernetes.io/projected/1eb08e1a-eb1b-415d-9cd4-df2f75713485-kube-api-access-gdlr6\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.278608 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb08e1a-eb1b-415d-9cd4-df2f75713485-config-volume\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.287407 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eb08e1a-eb1b-415d-9cd4-df2f75713485-secret-volume\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.296857 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdlr6\" (UniqueName: \"kubernetes.io/projected/1eb08e1a-eb1b-415d-9cd4-df2f75713485-kube-api-access-gdlr6\") pod \"collect-profiles-29486025-knhmc\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.498992 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:00 crc kubenswrapper[4982]: I0123 09:45:00.972176 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc"] Jan 23 09:45:01 crc kubenswrapper[4982]: I0123 09:45:01.014978 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" event={"ID":"1eb08e1a-eb1b-415d-9cd4-df2f75713485","Type":"ContainerStarted","Data":"6b85f13d806dfa2565ad23471854e6ea1354dcd050a3c6f78b4ab3ba703dc0af"} Jan 23 09:45:01 crc kubenswrapper[4982]: I0123 09:45:01.840725 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac556d5b-d326-4af7-b237-cdecbc452ce9" path="/var/lib/kubelet/pods/ac556d5b-d326-4af7-b237-cdecbc452ce9/volumes" Jan 23 09:45:02 crc kubenswrapper[4982]: I0123 09:45:02.027332 4982 generic.go:334] "Generic (PLEG): container finished" podID="1eb08e1a-eb1b-415d-9cd4-df2f75713485" containerID="e58a18f2e521c71027bef91bd9a715ac79137d39f7a4e5994078044c0c38ba0c" exitCode=0 Jan 23 09:45:02 crc kubenswrapper[4982]: I0123 09:45:02.028038 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" event={"ID":"1eb08e1a-eb1b-415d-9cd4-df2f75713485","Type":"ContainerDied","Data":"e58a18f2e521c71027bef91bd9a715ac79137d39f7a4e5994078044c0c38ba0c"} Jan 23 09:45:02 crc kubenswrapper[4982]: I0123 09:45:02.046507 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zp2tp"] Jan 23 09:45:02 crc kubenswrapper[4982]: I0123 09:45:02.055888 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zp2tp"] Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.408882 4982 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.458908 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eb08e1a-eb1b-415d-9cd4-df2f75713485-secret-volume\") pod \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.459045 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdlr6\" (UniqueName: \"kubernetes.io/projected/1eb08e1a-eb1b-415d-9cd4-df2f75713485-kube-api-access-gdlr6\") pod \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.459201 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb08e1a-eb1b-415d-9cd4-df2f75713485-config-volume\") pod \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\" (UID: \"1eb08e1a-eb1b-415d-9cd4-df2f75713485\") " Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.459872 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1eb08e1a-eb1b-415d-9cd4-df2f75713485-config-volume" (OuterVolumeSpecName: "config-volume") pod "1eb08e1a-eb1b-415d-9cd4-df2f75713485" (UID: "1eb08e1a-eb1b-415d-9cd4-df2f75713485"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.464587 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb08e1a-eb1b-415d-9cd4-df2f75713485-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1eb08e1a-eb1b-415d-9cd4-df2f75713485" (UID: "1eb08e1a-eb1b-415d-9cd4-df2f75713485"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.465099 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb08e1a-eb1b-415d-9cd4-df2f75713485-kube-api-access-gdlr6" (OuterVolumeSpecName: "kube-api-access-gdlr6") pod "1eb08e1a-eb1b-415d-9cd4-df2f75713485" (UID: "1eb08e1a-eb1b-415d-9cd4-df2f75713485"). InnerVolumeSpecName "kube-api-access-gdlr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.560785 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eb08e1a-eb1b-415d-9cd4-df2f75713485-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.560817 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1eb08e1a-eb1b-415d-9cd4-df2f75713485-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.560827 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdlr6\" (UniqueName: \"kubernetes.io/projected/1eb08e1a-eb1b-415d-9cd4-df2f75713485-kube-api-access-gdlr6\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:03 crc kubenswrapper[4982]: I0123 09:45:03.839142 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5f2d4c-e155-4920-bce0-17b75d698fae" path="/var/lib/kubelet/pods/7c5f2d4c-e155-4920-bce0-17b75d698fae/volumes" Jan 23 09:45:04 crc kubenswrapper[4982]: I0123 09:45:04.049155 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" event={"ID":"1eb08e1a-eb1b-415d-9cd4-df2f75713485","Type":"ContainerDied","Data":"6b85f13d806dfa2565ad23471854e6ea1354dcd050a3c6f78b4ab3ba703dc0af"} Jan 23 09:45:04 crc kubenswrapper[4982]: I0123 09:45:04.049546 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b85f13d806dfa2565ad23471854e6ea1354dcd050a3c6f78b4ab3ba703dc0af" Jan 23 09:45:04 crc kubenswrapper[4982]: I0123 09:45:04.049252 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486025-knhmc" Jan 23 09:45:04 crc kubenswrapper[4982]: I0123 09:45:04.498289 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"] Jan 23 09:45:04 crc kubenswrapper[4982]: I0123 09:45:04.509266 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rft2k"] Jan 23 09:45:05 crc kubenswrapper[4982]: I0123 09:45:05.839501 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:45:05 crc kubenswrapper[4982]: E0123 09:45:05.840241 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:45:05 crc kubenswrapper[4982]: I0123 09:45:05.844600 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4" path="/var/lib/kubelet/pods/6a1dcc31-fc9e-42c9-bb94-cde5c66eadb4/volumes" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.130842 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.200357 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd69b9bd9-qm8zr"] Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.200670 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" podUID="f44af539-af03-49f7-afad-cf5f221f620d" containerName="dnsmasq-dns" containerID="cri-o://20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356" gracePeriod=10 Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.367765 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-776dc944c9-lsdm4"] Jan 23 09:45:08 crc kubenswrapper[4982]: E0123 09:45:08.368656 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb08e1a-eb1b-415d-9cd4-df2f75713485" containerName="collect-profiles" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.368676 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb08e1a-eb1b-415d-9cd4-df2f75713485" containerName="collect-profiles" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.368944 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb08e1a-eb1b-415d-9cd4-df2f75713485" containerName="collect-profiles" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.377029 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.401079 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-776dc944c9-lsdm4"] Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.489246 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-ovsdbserver-sb\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.489301 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-ovsdbserver-nb\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.489347 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-config\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.489373 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-dns-svc\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.489403 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-openstack-cell1\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.489493 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbl8h\" (UniqueName: \"kubernetes.io/projected/fa0a2f02-eb37-430e-8700-096aa7f44eb8-kube-api-access-jbl8h\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.592096 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-ovsdbserver-sb\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.592144 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-ovsdbserver-nb\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.592182 4982 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-config\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.592212 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-dns-svc\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.592245 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-openstack-cell1\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.592353 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbl8h\" (UniqueName: \"kubernetes.io/projected/fa0a2f02-eb37-430e-8700-096aa7f44eb8-kube-api-access-jbl8h\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.593279 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-config\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.593372 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-dns-svc\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.593891 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-ovsdbserver-sb\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.593928 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-openstack-cell1\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.594484 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa0a2f02-eb37-430e-8700-096aa7f44eb8-ovsdbserver-nb\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.640172 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbl8h\" (UniqueName: 
\"kubernetes.io/projected/fa0a2f02-eb37-430e-8700-096aa7f44eb8-kube-api-access-jbl8h\") pod \"dnsmasq-dns-776dc944c9-lsdm4\" (UID: \"fa0a2f02-eb37-430e-8700-096aa7f44eb8\") " pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.753031 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:08 crc kubenswrapper[4982]: I0123 09:45:08.950847 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.103599 4982 generic.go:334] "Generic (PLEG): container finished" podID="f44af539-af03-49f7-afad-cf5f221f620d" containerID="20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356" exitCode=0 Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.103670 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" event={"ID":"f44af539-af03-49f7-afad-cf5f221f620d","Type":"ContainerDied","Data":"20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356"} Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.103700 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" event={"ID":"f44af539-af03-49f7-afad-cf5f221f620d","Type":"ContainerDied","Data":"6600155272b628898e015fdfa42893a3fa91eeef58e349c4edf0cd48a219cea6"} Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.103717 4982 scope.go:117] "RemoveContainer" containerID="20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.103868 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd69b9bd9-qm8zr" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.123543 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmprd\" (UniqueName: \"kubernetes.io/projected/f44af539-af03-49f7-afad-cf5f221f620d-kube-api-access-lmprd\") pod \"f44af539-af03-49f7-afad-cf5f221f620d\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.124025 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-sb\") pod \"f44af539-af03-49f7-afad-cf5f221f620d\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.124068 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-dns-svc\") pod \"f44af539-af03-49f7-afad-cf5f221f620d\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.124169 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-nb\") pod \"f44af539-af03-49f7-afad-cf5f221f620d\" (UID: \"f44af539-af03-49f7-afad-cf5f221f620d\") " Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.124249 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-config\") pod \"f44af539-af03-49f7-afad-cf5f221f620d\" (UID: 
\"f44af539-af03-49f7-afad-cf5f221f620d\") " Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.141990 4982 scope.go:117] "RemoveContainer" containerID="d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.142282 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44af539-af03-49f7-afad-cf5f221f620d-kube-api-access-lmprd" (OuterVolumeSpecName: "kube-api-access-lmprd") pod "f44af539-af03-49f7-afad-cf5f221f620d" (UID: "f44af539-af03-49f7-afad-cf5f221f620d"). InnerVolumeSpecName "kube-api-access-lmprd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.188641 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-config" (OuterVolumeSpecName: "config") pod "f44af539-af03-49f7-afad-cf5f221f620d" (UID: "f44af539-af03-49f7-afad-cf5f221f620d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.190387 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f44af539-af03-49f7-afad-cf5f221f620d" (UID: "f44af539-af03-49f7-afad-cf5f221f620d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.190789 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f44af539-af03-49f7-afad-cf5f221f620d" (UID: "f44af539-af03-49f7-afad-cf5f221f620d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.203253 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f44af539-af03-49f7-afad-cf5f221f620d" (UID: "f44af539-af03-49f7-afad-cf5f221f620d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.215060 4982 scope.go:117] "RemoveContainer" containerID="20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356" Jan 23 09:45:09 crc kubenswrapper[4982]: E0123 09:45:09.215521 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356\": container with ID starting with 20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356 not found: ID does not exist" containerID="20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.215654 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356"} err="failed to get container status \"20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356\": rpc error: code = NotFound desc = could not find container \"20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356\": container with ID starting with 20c36c2e71d52592710dcd5a7ddc31484615ccac99cfba5e86a4e13353694356 not found: ID does not exist" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.215690 4982 scope.go:117] "RemoveContainer" containerID="d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9" Jan 23 09:45:09 crc kubenswrapper[4982]: E0123 09:45:09.216250 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9\": container with ID starting with d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9 not found: ID does not exist" containerID="d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.216286 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9"} err="failed to get container status \"d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9\": rpc error: code = NotFound desc = could not find container \"d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9\": container with ID starting with d14ecf427a9d8165fc9ba9ef3a39ec2cf1323b1b9dae3a934920950a0bcd0ab9 not found: ID does not exist" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.227032 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.227065 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.227075 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmprd\" (UniqueName: \"kubernetes.io/projected/f44af539-af03-49f7-afad-cf5f221f620d-kube-api-access-lmprd\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.227085 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.227095 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44af539-af03-49f7-afad-cf5f221f620d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:09 crc kubenswrapper[4982]: W0123 09:45:09.290981 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa0a2f02_eb37_430e_8700_096aa7f44eb8.slice/crio-eabfa1d0e06e317e57b49d422d71537822f2cc34e84ef34572874a9bc3cfc02d WatchSource:0}: Error finding container eabfa1d0e06e317e57b49d422d71537822f2cc34e84ef34572874a9bc3cfc02d: Status 404 returned error can't find the container with id eabfa1d0e06e317e57b49d422d71537822f2cc34e84ef34572874a9bc3cfc02d Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.296577 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-776dc944c9-lsdm4"] Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.496175 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd69b9bd9-qm8zr"] Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.507225 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd69b9bd9-qm8zr"] Jan 23 09:45:09 crc kubenswrapper[4982]: I0123 09:45:09.842461 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44af539-af03-49f7-afad-cf5f221f620d" path="/var/lib/kubelet/pods/f44af539-af03-49f7-afad-cf5f221f620d/volumes" Jan 23 09:45:10 crc kubenswrapper[4982]: I0123 09:45:10.119675 4982 generic.go:334] "Generic (PLEG): container finished" podID="fa0a2f02-eb37-430e-8700-096aa7f44eb8" containerID="ddce9e6743c1782df919b654e902d032cd19a44b2e8082bca110dce8c0f01e38" exitCode=0 Jan 23 09:45:10 crc kubenswrapper[4982]: I0123 09:45:10.119896 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" event={"ID":"fa0a2f02-eb37-430e-8700-096aa7f44eb8","Type":"ContainerDied","Data":"ddce9e6743c1782df919b654e902d032cd19a44b2e8082bca110dce8c0f01e38"} Jan 23 09:45:10 crc kubenswrapper[4982]: I0123 09:45:10.120004 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" event={"ID":"fa0a2f02-eb37-430e-8700-096aa7f44eb8","Type":"ContainerStarted","Data":"eabfa1d0e06e317e57b49d422d71537822f2cc34e84ef34572874a9bc3cfc02d"} Jan 23 09:45:11 crc kubenswrapper[4982]: I0123 09:45:11.150502 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" event={"ID":"fa0a2f02-eb37-430e-8700-096aa7f44eb8","Type":"ContainerStarted","Data":"d650bf20658c6d81e3056dbf8e8139a38ed39f557c109b0c31e7164069c9f80d"} Jan 23 09:45:11 crc kubenswrapper[4982]: I0123 09:45:11.152499 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:11 crc kubenswrapper[4982]: I0123 09:45:11.181759 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" podStartSLOduration=3.181737773 podStartE2EDuration="3.181737773s" podCreationTimestamp="2026-01-23 09:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:45:11.173586629 +0000 UTC m=+6585.653950005" watchObservedRunningTime="2026-01-23 
09:45:11.181737773 +0000 UTC m=+6585.662101159" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.172191 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c8dj5"] Jan 23 09:45:13 crc kubenswrapper[4982]: E0123 09:45:13.173109 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44af539-af03-49f7-afad-cf5f221f620d" containerName="init" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.173124 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44af539-af03-49f7-afad-cf5f221f620d" containerName="init" Jan 23 09:45:13 crc kubenswrapper[4982]: E0123 09:45:13.173139 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44af539-af03-49f7-afad-cf5f221f620d" containerName="dnsmasq-dns" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.173146 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44af539-af03-49f7-afad-cf5f221f620d" containerName="dnsmasq-dns" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.173366 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44af539-af03-49f7-afad-cf5f221f620d" containerName="dnsmasq-dns" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.175362 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.189299 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8dj5"] Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.221505 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs42z\" (UniqueName: \"kubernetes.io/projected/a9878e34-dc6a-4a02-af3c-704adcba4b3f-kube-api-access-rs42z\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.221911 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-utilities\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.222128 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-catalog-content\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.324253 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs42z\" (UniqueName: \"kubernetes.io/projected/a9878e34-dc6a-4a02-af3c-704adcba4b3f-kube-api-access-rs42z\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.324351 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-utilities\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " 
pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.324705 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-catalog-content\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.325222 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-catalog-content\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.325284 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-utilities\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.348184 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs42z\" (UniqueName: \"kubernetes.io/projected/a9878e34-dc6a-4a02-af3c-704adcba4b3f-kube-api-access-rs42z\") pod \"certified-operators-c8dj5\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:13 crc kubenswrapper[4982]: I0123 09:45:13.501380 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:14 crc kubenswrapper[4982]: W0123 09:45:14.034741 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9878e34_dc6a_4a02_af3c_704adcba4b3f.slice/crio-cf296f25340f7d943fcdf16576288626a84ba1e07436fb3eb21f4ac275d29508 WatchSource:0}: Error finding container cf296f25340f7d943fcdf16576288626a84ba1e07436fb3eb21f4ac275d29508: Status 404 returned error can't find the container with id cf296f25340f7d943fcdf16576288626a84ba1e07436fb3eb21f4ac275d29508 Jan 23 09:45:14 crc kubenswrapper[4982]: I0123 09:45:14.041751 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c8dj5"] Jan 23 09:45:14 crc kubenswrapper[4982]: I0123 09:45:14.190095 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerStarted","Data":"cf296f25340f7d943fcdf16576288626a84ba1e07436fb3eb21f4ac275d29508"} Jan 23 09:45:15 crc kubenswrapper[4982]: I0123 09:45:15.205908 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerID="c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5" exitCode=0 Jan 23 09:45:15 crc kubenswrapper[4982]: I0123 09:45:15.205969 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerDied","Data":"c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5"} Jan 23 09:45:16 crc kubenswrapper[4982]: I0123 09:45:16.219658 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerStarted","Data":"1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b"} Jan 23 09:45:18 crc kubenswrapper[4982]: I0123 09:45:18.240407 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerID="1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b" exitCode=0 Jan 23 09:45:18 crc kubenswrapper[4982]: I0123 09:45:18.240489 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerDied","Data":"1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b"} Jan 23 09:45:18 crc kubenswrapper[4982]: I0123 09:45:18.754909 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-776dc944c9-lsdm4" Jan 23 09:45:18 crc kubenswrapper[4982]: I0123 09:45:18.827430 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bfff5f6f9-c4g84"] Jan 23 09:45:18 crc kubenswrapper[4982]: I0123 09:45:18.828841 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" containerName="dnsmasq-dns" containerID="cri-o://0cd7b6af2759e2c91e73f09c11154be43edf015f4a8ef882e08f01dc975686ef" gracePeriod=10 Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.268485 4982 generic.go:334] "Generic (PLEG): container finished" podID="888c696f-7c40-49a0-972f-f992f0a950e0" containerID="0cd7b6af2759e2c91e73f09c11154be43edf015f4a8ef882e08f01dc975686ef" exitCode=0 Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.268832 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" event={"ID":"888c696f-7c40-49a0-972f-f992f0a950e0","Type":"ContainerDied","Data":"0cd7b6af2759e2c91e73f09c11154be43edf015f4a8ef882e08f01dc975686ef"} Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.271582 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerStarted","Data":"5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5"} Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.395009 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.417508 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c8dj5" podStartSLOduration=3.005088606 podStartE2EDuration="6.417488865s" podCreationTimestamp="2026-01-23 09:45:13 +0000 UTC" firstStartedPulling="2026-01-23 09:45:15.214469153 +0000 UTC m=+6589.694832539" lastFinishedPulling="2026-01-23 09:45:18.626869422 +0000 UTC m=+6593.107232798" observedRunningTime="2026-01-23 09:45:19.302218056 +0000 UTC m=+6593.782581452" watchObservedRunningTime="2026-01-23 09:45:19.417488865 +0000 UTC m=+6593.897852251" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.474146 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-config\") pod \"888c696f-7c40-49a0-972f-f992f0a950e0\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.474948 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-openstack-cell1\") pod \"888c696f-7c40-49a0-972f-f992f0a950e0\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.475056 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-nb\") pod \"888c696f-7c40-49a0-972f-f992f0a950e0\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.475173 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-dns-svc\") pod \"888c696f-7c40-49a0-972f-f992f0a950e0\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.475331 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttvf6\" (UniqueName: \"kubernetes.io/projected/888c696f-7c40-49a0-972f-f992f0a950e0-kube-api-access-ttvf6\") pod \"888c696f-7c40-49a0-972f-f992f0a950e0\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.475431 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-sb\") pod \"888c696f-7c40-49a0-972f-f992f0a950e0\" (UID: \"888c696f-7c40-49a0-972f-f992f0a950e0\") " Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.481209 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888c696f-7c40-49a0-972f-f992f0a950e0-kube-api-access-ttvf6" (OuterVolumeSpecName: "kube-api-access-ttvf6") pod "888c696f-7c40-49a0-972f-f992f0a950e0" (UID: "888c696f-7c40-49a0-972f-f992f0a950e0"). InnerVolumeSpecName "kube-api-access-ttvf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.535707 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "888c696f-7c40-49a0-972f-f992f0a950e0" (UID: "888c696f-7c40-49a0-972f-f992f0a950e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.536875 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "888c696f-7c40-49a0-972f-f992f0a950e0" (UID: "888c696f-7c40-49a0-972f-f992f0a950e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.538681 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "888c696f-7c40-49a0-972f-f992f0a950e0" (UID: "888c696f-7c40-49a0-972f-f992f0a950e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.539896 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "888c696f-7c40-49a0-972f-f992f0a950e0" (UID: "888c696f-7c40-49a0-972f-f992f0a950e0"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.544236 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-config" (OuterVolumeSpecName: "config") pod "888c696f-7c40-49a0-972f-f992f0a950e0" (UID: "888c696f-7c40-49a0-972f-f992f0a950e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.577571 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.577610 4982 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.577635 4982 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.577649 4982 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.577658 4982 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/888c696f-7c40-49a0-972f-f992f0a950e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:19 crc kubenswrapper[4982]: I0123 09:45:19.577667 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttvf6\" (UniqueName: \"kubernetes.io/projected/888c696f-7c40-49a0-972f-f992f0a950e0-kube-api-access-ttvf6\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.283766 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" event={"ID":"888c696f-7c40-49a0-972f-f992f0a950e0","Type":"ContainerDied","Data":"37bfbbd5c00432b5d4783633329a3d79e6b2951b2e1b3f5261b35e7dd52cae1a"} Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.283831 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bfff5f6f9-c4g84" Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.283844 4982 scope.go:117] "RemoveContainer" containerID="0cd7b6af2759e2c91e73f09c11154be43edf015f4a8ef882e08f01dc975686ef" Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.315483 4982 scope.go:117] "RemoveContainer" containerID="b5e2f289b76433107bdbee8d586e2a026ad2ad3999c62ed11958d2c9d97c44fd" Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.317585 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bfff5f6f9-c4g84"] Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.328820 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bfff5f6f9-c4g84"] Jan 23 09:45:20 crc kubenswrapper[4982]: I0123 09:45:20.827951 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:45:20 crc kubenswrapper[4982]: E0123 09:45:20.828616 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:45:21 crc kubenswrapper[4982]: I0123 09:45:21.030024 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5mm5"] Jan 23 09:45:21 crc kubenswrapper[4982]: I0123 09:45:21.047468 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5mm5"] Jan 23 09:45:21 crc kubenswrapper[4982]: I0123 09:45:21.847601 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" path="/var/lib/kubelet/pods/888c696f-7c40-49a0-972f-f992f0a950e0/volumes" Jan 23 09:45:21 crc kubenswrapper[4982]: I0123 09:45:21.849323 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9678e71-330e-4862-a9a6-c4280299fb1b" path="/var/lib/kubelet/pods/a9678e71-330e-4862-a9a6-c4280299fb1b/volumes" Jan 23 09:45:23 crc kubenswrapper[4982]: I0123 09:45:23.502021 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:23 crc kubenswrapper[4982]: I0123 09:45:23.502350 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:23 crc kubenswrapper[4982]: I0123 09:45:23.559425 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:24 crc kubenswrapper[4982]: I0123 09:45:24.366816 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:24 crc kubenswrapper[4982]: I0123 09:45:24.416689 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8dj5"] Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.341354 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c8dj5" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="registry-server" containerID="cri-o://5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5" 
gracePeriod=2 Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.826474 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.941539 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-catalog-content\") pod \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.941973 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs42z\" (UniqueName: \"kubernetes.io/projected/a9878e34-dc6a-4a02-af3c-704adcba4b3f-kube-api-access-rs42z\") pod \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.942799 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-utilities\") pod \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\" (UID: \"a9878e34-dc6a-4a02-af3c-704adcba4b3f\") " Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.943394 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-utilities" (OuterVolumeSpecName: "utilities") pod "a9878e34-dc6a-4a02-af3c-704adcba4b3f" (UID: "a9878e34-dc6a-4a02-af3c-704adcba4b3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.946008 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:26 crc kubenswrapper[4982]: I0123 09:45:26.958955 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9878e34-dc6a-4a02-af3c-704adcba4b3f-kube-api-access-rs42z" (OuterVolumeSpecName: "kube-api-access-rs42z") pod "a9878e34-dc6a-4a02-af3c-704adcba4b3f" (UID: "a9878e34-dc6a-4a02-af3c-704adcba4b3f"). InnerVolumeSpecName "kube-api-access-rs42z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.004215 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9878e34-dc6a-4a02-af3c-704adcba4b3f" (UID: "a9878e34-dc6a-4a02-af3c-704adcba4b3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.048021 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9878e34-dc6a-4a02-af3c-704adcba4b3f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.048319 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs42z\" (UniqueName: \"kubernetes.io/projected/a9878e34-dc6a-4a02-af3c-704adcba4b3f-kube-api-access-rs42z\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.352029 4982 generic.go:334] "Generic (PLEG): container finished" podID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerID="5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5" exitCode=0 Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.352078 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerDied","Data":"5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5"} Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.352114 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c8dj5" event={"ID":"a9878e34-dc6a-4a02-af3c-704adcba4b3f","Type":"ContainerDied","Data":"cf296f25340f7d943fcdf16576288626a84ba1e07436fb3eb21f4ac275d29508"} Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.352135 4982 scope.go:117] "RemoveContainer" containerID="5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.352721 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c8dj5" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.388591 4982 scope.go:117] "RemoveContainer" containerID="1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.398830 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c8dj5"] Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.409169 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c8dj5"] Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.426439 4982 scope.go:117] "RemoveContainer" containerID="c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.473019 4982 scope.go:117] "RemoveContainer" containerID="5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5" Jan 23 09:45:27 crc kubenswrapper[4982]: E0123 09:45:27.473422 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5\": container with ID starting with 5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5 not found: ID does not exist" containerID="5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.473460 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5"} err="failed to get container status \"5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5\": rpc error: code = NotFound desc = could not find container \"5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5\": container with ID starting with 5ea76ae8a78353af3df539d0405d5760b2cdcdb917dff7eae1503a7dae3eedf5 not found: ID does not exist" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.473485 4982 scope.go:117] "RemoveContainer" containerID="1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b" Jan 23 09:45:27 crc kubenswrapper[4982]: E0123 09:45:27.474000 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b\": container with ID starting with 1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b not found: ID does not exist" containerID="1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.474033 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b"} err="failed to get container status \"1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b\": rpc error: code = NotFound desc = could not find container \"1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b\": container with ID starting with 1b71f00bbaaec07edfa270be851018abfac0b4731cf0e400e1623edbb736c81b not found: ID does not exist" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.474055 4982 scope.go:117] "RemoveContainer" containerID="c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5" Jan 23 09:45:27 crc kubenswrapper[4982]: E0123 09:45:27.474449 4982 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5\": container with ID starting with c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5 not found: ID does not exist" containerID="c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.474475 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5"} err="failed to get container status \"c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5\": rpc error: code = NotFound desc = could not find container \"c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5\": container with ID starting with c67d0f1d7a2801288d88116f6ee2ed31323d62f17514523ac34d3e9cc60fcae5 not found: ID does not exist" Jan 23 09:45:27 crc kubenswrapper[4982]: I0123 09:45:27.842493 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" path="/var/lib/kubelet/pods/a9878e34-dc6a-4a02-af3c-704adcba4b3f/volumes" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.247804 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg"] Jan 23 09:45:29 crc kubenswrapper[4982]: E0123 09:45:29.248606 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="extract-utilities" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.248621 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="extract-utilities" Jan 23 09:45:29 crc kubenswrapper[4982]: E0123 09:45:29.248649 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="registry-server" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.248655 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="registry-server" Jan 23 09:45:29 crc kubenswrapper[4982]: E0123 09:45:29.248682 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" containerName="dnsmasq-dns" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.248688 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" containerName="dnsmasq-dns" Jan 23 09:45:29 crc kubenswrapper[4982]: E0123 09:45:29.248722 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" containerName="init" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.248730 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" containerName="init" Jan 23 09:45:29 crc kubenswrapper[4982]: E0123 09:45:29.248746 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="extract-content" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.248754 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="extract-content" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.249011 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9878e34-dc6a-4a02-af3c-704adcba4b3f" containerName="registry-server" Jan 
23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.249026 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c696f-7c40-49a0-972f-f992f0a950e0" containerName="dnsmasq-dns" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.249850 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.251912 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.253821 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.253961 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.255940 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s9mlf" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.265520 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg"] Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.398383 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.398503 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.398583 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjtj\" (UniqueName: \"kubernetes.io/projected/1d05323a-35b9-413c-a48f-6ffa11d34423-kube-api-access-8jjtj\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.398854 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.501569 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjtj\" (UniqueName: 
\"kubernetes.io/projected/1d05323a-35b9-413c-a48f-6ffa11d34423-kube-api-access-8jjtj\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.502071 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.502262 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.502419 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.509672 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.510657 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.512682 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-ssh-key-openstack-cell1\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.522987 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjtj\" (UniqueName: \"kubernetes.io/projected/1d05323a-35b9-413c-a48f-6ffa11d34423-kube-api-access-8jjtj\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:29 crc kubenswrapper[4982]: I0123 09:45:29.582245 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:30 crc kubenswrapper[4982]: I0123 09:45:30.161239 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg"] Jan 23 09:45:30 crc kubenswrapper[4982]: I0123 09:45:30.391217 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" event={"ID":"1d05323a-35b9-413c-a48f-6ffa11d34423","Type":"ContainerStarted","Data":"38f0b3361fb732f04f728895ce4196a04ff1ac2c7e62e11828db2292a6b1980f"} Jan 23 09:45:31 crc kubenswrapper[4982]: I0123 09:45:31.827990 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:45:31 crc kubenswrapper[4982]: E0123 09:45:31.828536 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:45:39 crc kubenswrapper[4982]: I0123 09:45:39.490937 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" event={"ID":"1d05323a-35b9-413c-a48f-6ffa11d34423","Type":"ContainerStarted","Data":"d3a5a932df58d6080d7af1732e3717d4b2e30b5c1b0f8ed41f2a82c63a598920"} Jan 23 09:45:39 crc kubenswrapper[4982]: I0123 09:45:39.516705 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" podStartSLOduration=1.6103390480000002 podStartE2EDuration="10.516671852s" podCreationTimestamp="2026-01-23 09:45:29 +0000 UTC" firstStartedPulling="2026-01-23 09:45:30.16671744 +0000 UTC m=+6604.647080826" lastFinishedPulling="2026-01-23 09:45:39.073050244 +0000 UTC m=+6613.553413630" observedRunningTime="2026-01-23 09:45:39.50558386 +0000 UTC m=+6613.985947246" watchObservedRunningTime="2026-01-23 09:45:39.516671852 +0000 UTC m=+6613.997035238" Jan 23 09:45:46 crc kubenswrapper[4982]: I0123 09:45:46.828019 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:45:46 crc kubenswrapper[4982]: E0123 09:45:46.829258 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:45:47 crc kubenswrapper[4982]: I0123 09:45:47.531537 4982 scope.go:117] "RemoveContainer" containerID="439a8ba3ae7930dfbb339f69f915700daee3f9cb4a4bf728edb5d3556463d003" Jan 23 09:45:47 crc kubenswrapper[4982]: I0123 09:45:47.566127 4982 scope.go:117] "RemoveContainer" 
containerID="d2c6d4d10787121beff78454c0f54383da5e697a04b7d9e1ae30f1c35fa80dd2" Jan 23 09:45:47 crc kubenswrapper[4982]: I0123 09:45:47.627884 4982 scope.go:117] "RemoveContainer" containerID="c7815a8ad19d285d91ab37c1bfc6c55f94a202430afe9edf5a62f02503d68606" Jan 23 09:45:47 crc kubenswrapper[4982]: I0123 09:45:47.685265 4982 scope.go:117] "RemoveContainer" containerID="fd0fb3cd364215840f646ccf1432034acc8ff7c1f1aa25346ba8ba452889cfc1" Jan 23 09:45:52 crc kubenswrapper[4982]: E0123 09:45:52.539129 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d05323a_35b9_413c_a48f_6ffa11d34423.slice/crio-conmon-d3a5a932df58d6080d7af1732e3717d4b2e30b5c1b0f8ed41f2a82c63a598920.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:45:52 crc kubenswrapper[4982]: I0123 09:45:52.626183 4982 generic.go:334] "Generic (PLEG): container finished" podID="1d05323a-35b9-413c-a48f-6ffa11d34423" containerID="d3a5a932df58d6080d7af1732e3717d4b2e30b5c1b0f8ed41f2a82c63a598920" exitCode=0 Jan 23 09:45:52 crc kubenswrapper[4982]: I0123 09:45:52.626238 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" event={"ID":"1d05323a-35b9-413c-a48f-6ffa11d34423","Type":"ContainerDied","Data":"d3a5a932df58d6080d7af1732e3717d4b2e30b5c1b0f8ed41f2a82c63a598920"} Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.130742 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.206098 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjtj\" (UniqueName: \"kubernetes.io/projected/1d05323a-35b9-413c-a48f-6ffa11d34423-kube-api-access-8jjtj\") pod \"1d05323a-35b9-413c-a48f-6ffa11d34423\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.206301 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-pre-adoption-validation-combined-ca-bundle\") pod \"1d05323a-35b9-413c-a48f-6ffa11d34423\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.206366 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-inventory\") pod \"1d05323a-35b9-413c-a48f-6ffa11d34423\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.206456 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-ssh-key-openstack-cell1\") pod \"1d05323a-35b9-413c-a48f-6ffa11d34423\" (UID: \"1d05323a-35b9-413c-a48f-6ffa11d34423\") " Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.211967 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "1d05323a-35b9-413c-a48f-6ffa11d34423" (UID: 
"1d05323a-35b9-413c-a48f-6ffa11d34423"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.212549 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d05323a-35b9-413c-a48f-6ffa11d34423-kube-api-access-8jjtj" (OuterVolumeSpecName: "kube-api-access-8jjtj") pod "1d05323a-35b9-413c-a48f-6ffa11d34423" (UID: "1d05323a-35b9-413c-a48f-6ffa11d34423"). InnerVolumeSpecName "kube-api-access-8jjtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.239028 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-inventory" (OuterVolumeSpecName: "inventory") pod "1d05323a-35b9-413c-a48f-6ffa11d34423" (UID: "1d05323a-35b9-413c-a48f-6ffa11d34423"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.241219 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "1d05323a-35b9-413c-a48f-6ffa11d34423" (UID: "1d05323a-35b9-413c-a48f-6ffa11d34423"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.309760 4982 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.309806 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.309822 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/1d05323a-35b9-413c-a48f-6ffa11d34423-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.309835 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjtj\" (UniqueName: \"kubernetes.io/projected/1d05323a-35b9-413c-a48f-6ffa11d34423-kube-api-access-8jjtj\") on node \"crc\" DevicePath \"\"" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.663935 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" event={"ID":"1d05323a-35b9-413c-a48f-6ffa11d34423","Type":"ContainerDied","Data":"38f0b3361fb732f04f728895ce4196a04ff1ac2c7e62e11828db2292a6b1980f"} Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.663975 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f0b3361fb732f04f728895ce4196a04ff1ac2c7e62e11828db2292a6b1980f" Jan 23 09:45:54 crc kubenswrapper[4982]: I0123 09:45:54.664009 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg" Jan 23 09:45:59 crc kubenswrapper[4982]: I0123 09:45:59.828177 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:45:59 crc kubenswrapper[4982]: E0123 09:45:59.828735 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.118593 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6"] Jan 23 09:46:02 crc kubenswrapper[4982]: E0123 09:46:02.121097 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d05323a-35b9-413c-a48f-6ffa11d34423" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.121121 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d05323a-35b9-413c-a48f-6ffa11d34423" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.121538 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d05323a-35b9-413c-a48f-6ffa11d34423" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.122683 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.126309 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.126569 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.126879 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-s9mlf" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.130070 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.137065 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6"] Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.224240 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n297h\" (UniqueName: \"kubernetes.io/projected/8f743137-4885-4abc-b136-19b84d136f27-kube-api-access-n297h\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.224307 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.224559 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.224634 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.327242 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n297h\" (UniqueName: \"kubernetes.io/projected/8f743137-4885-4abc-b136-19b84d136f27-kube-api-access-n297h\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.327317 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: 
\"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.327421 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.327469 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.333655 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-ssh-key-openstack-cell1\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.334621 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.341931 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.345005 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n297h\" (UniqueName: \"kubernetes.io/projected/8f743137-4885-4abc-b136-19b84d136f27-kube-api-access-n297h\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:02 crc kubenswrapper[4982]: I0123 09:46:02.458095 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" Jan 23 09:46:03 crc kubenswrapper[4982]: I0123 09:46:03.080293 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6"] Jan 23 09:46:03 crc kubenswrapper[4982]: I0123 09:46:03.773478 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" event={"ID":"8f743137-4885-4abc-b136-19b84d136f27","Type":"ContainerStarted","Data":"07fd7d3d3f1b1c38bc9699e33a94ba22b4e639c6beb717b64fe1bf10485bab26"} Jan 23 09:46:04 crc kubenswrapper[4982]: I0123 09:46:04.793808 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" event={"ID":"8f743137-4885-4abc-b136-19b84d136f27","Type":"ContainerStarted","Data":"56dc999fee95c3b1a9c14547a592a904613af7b619590987ba4b575f3c54e81e"} Jan 23 09:46:04 crc kubenswrapper[4982]: I0123 09:46:04.839978 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" podStartSLOduration=2.075492823 podStartE2EDuration="2.839940604s" podCreationTimestamp="2026-01-23 09:46:02 +0000 UTC" firstStartedPulling="2026-01-23 09:46:03.088561701 +0000 UTC m=+6637.568925087" lastFinishedPulling="2026-01-23 09:46:03.853009462 +0000 UTC m=+6638.333372868" observedRunningTime="2026-01-23 09:46:04.816950008 +0000 UTC m=+6639.297313384" watchObservedRunningTime="2026-01-23 09:46:04.839940604 +0000 UTC m=+6639.320304070" Jan 23 09:46:12 crc kubenswrapper[4982]: I0123 09:46:12.827406 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:46:12 crc kubenswrapper[4982]: E0123 09:46:12.828398 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:46:24 crc kubenswrapper[4982]: I0123 09:46:24.829127 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:46:24 crc kubenswrapper[4982]: E0123 09:46:24.829955 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:46:36 crc kubenswrapper[4982]: I0123 09:46:36.826967 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:46:36 crc kubenswrapper[4982]: E0123 09:46:36.827614 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:46:44 crc kubenswrapper[4982]: I0123 09:46:44.041377 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-jhg7l"] Jan 23 09:46:44 crc kubenswrapper[4982]: I0123 09:46:44.057358 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-jhg7l"] Jan 23 09:46:45 crc kubenswrapper[4982]: I0123 09:46:45.042474 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-4217-account-create-update-hjp95"] Jan 23 09:46:45 crc kubenswrapper[4982]: I0123 09:46:45.056396 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-4217-account-create-update-hjp95"] Jan 23 09:46:45 crc kubenswrapper[4982]: I0123 09:46:45.842599 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ac8822-3255-4dd1-916e-283f0711ae83" path="/var/lib/kubelet/pods/73ac8822-3255-4dd1-916e-283f0711ae83/volumes" Jan 23 09:46:45 crc kubenswrapper[4982]: I0123 09:46:45.844015 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24b3bf7-b1c7-47da-8799-b5888e75bda2" path="/var/lib/kubelet/pods/e24b3bf7-b1c7-47da-8799-b5888e75bda2/volumes" Jan 23 09:46:47 crc kubenswrapper[4982]: I0123 09:46:47.884460 4982 scope.go:117] "RemoveContainer" containerID="885b8458a4ac3290eb1fff620a9a2e89da32b06b3d62b667462de46fab70a36a" Jan 23 09:46:47 crc kubenswrapper[4982]: I0123 09:46:47.915848 4982 scope.go:117] "RemoveContainer" containerID="bbc5c35e8ed7bf8ee3ef216257e5254e47f351e3348fb3a5474101f9c202141d" Jan 23 09:46:50 crc kubenswrapper[4982]: I0123 09:46:50.053240 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-j8crf"] Jan 23 09:46:50 crc kubenswrapper[4982]: I0123 09:46:50.068558 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-j8crf"] Jan 23 09:46:50 crc kubenswrapper[4982]: I0123 09:46:50.828352 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:46:50 crc kubenswrapper[4982]: E0123 09:46:50.828861 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:46:51 crc kubenswrapper[4982]: I0123 09:46:51.026003 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-d93b-account-create-update-m89hf"] Jan 23 09:46:51 crc kubenswrapper[4982]: I0123 09:46:51.040059 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-d93b-account-create-update-m89hf"] Jan 23 09:46:51 crc kubenswrapper[4982]: I0123 09:46:51.841151 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bbe8afc-68f3-44eb-bee2-f4b5783d32e0" path="/var/lib/kubelet/pods/0bbe8afc-68f3-44eb-bee2-f4b5783d32e0/volumes" Jan 23 09:46:51 crc kubenswrapper[4982]: I0123 09:46:51.841814 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a524b9a0-514a-4af7-a8f2-2f0458db6db8" path="/var/lib/kubelet/pods/a524b9a0-514a-4af7-a8f2-2f0458db6db8/volumes" Jan 23 09:47:03 crc kubenswrapper[4982]: I0123 
09:47:03.827727 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:47:03 crc kubenswrapper[4982]: E0123 09:47:03.828548 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:47:18 crc kubenswrapper[4982]: I0123 09:47:18.828220 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:47:18 crc kubenswrapper[4982]: E0123 09:47:18.829060 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:47:31 crc kubenswrapper[4982]: I0123 09:47:31.060412 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-pbztw"] Jan 23 09:47:31 crc kubenswrapper[4982]: I0123 09:47:31.071320 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-pbztw"] Jan 23 09:47:31 crc kubenswrapper[4982]: I0123 09:47:31.845227 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8a87ff-e284-4a46-9236-b0a0d530a474" path="/var/lib/kubelet/pods/7f8a87ff-e284-4a46-9236-b0a0d530a474/volumes" Jan 23 09:47:32 crc kubenswrapper[4982]: I0123 09:47:32.827870 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:47:32 crc kubenswrapper[4982]: E0123 09:47:32.828474 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:47:44 crc kubenswrapper[4982]: I0123 09:47:44.827881 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:47:46 crc kubenswrapper[4982]: I0123 09:47:46.010030 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"125784f8292abf2b5b21751e74054cc18284f00b2963365788c2199c0e04e59f"} Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.097429 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wb7zx"] Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.102042 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.116985 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wb7zx"] Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.202108 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-catalog-content\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.202208 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-utilities\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.202237 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvj9w\" (UniqueName: \"kubernetes.io/projected/7bb31f87-7985-427b-8190-63e4bc39ba85-kube-api-access-wvj9w\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.304046 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-catalog-content\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.304123 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-utilities\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.304146 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvj9w\" (UniqueName: \"kubernetes.io/projected/7bb31f87-7985-427b-8190-63e4bc39ba85-kube-api-access-wvj9w\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.304579 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-catalog-content\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.304659 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-utilities\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.325750 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wvj9w\" (UniqueName: \"kubernetes.io/projected/7bb31f87-7985-427b-8190-63e4bc39ba85-kube-api-access-wvj9w\") pod \"redhat-operators-wb7zx\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.450334 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:47 crc kubenswrapper[4982]: I0123 09:47:47.916321 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wb7zx"] Jan 23 09:47:47 crc kubenswrapper[4982]: W0123 09:47:47.918723 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb31f87_7985_427b_8190_63e4bc39ba85.slice/crio-7bfc7e295654bd9f3b9c86a9352cb07b7b9e6f464ef77c8446220dffa479f0e3 WatchSource:0}: Error finding container 7bfc7e295654bd9f3b9c86a9352cb07b7b9e6f464ef77c8446220dffa479f0e3: Status 404 returned error can't find the container with id 7bfc7e295654bd9f3b9c86a9352cb07b7b9e6f464ef77c8446220dffa479f0e3 Jan 23 09:47:48 crc kubenswrapper[4982]: I0123 09:47:48.037086 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerStarted","Data":"7bfc7e295654bd9f3b9c86a9352cb07b7b9e6f464ef77c8446220dffa479f0e3"} Jan 23 09:47:48 crc kubenswrapper[4982]: I0123 09:47:48.080931 4982 scope.go:117] "RemoveContainer" containerID="ae9f70ab14474e90d829beb8b6437d0ca92d7399d2a12a31d3a334f8572fe7e0" Jan 23 09:47:48 crc kubenswrapper[4982]: I0123 09:47:48.109555 4982 scope.go:117] "RemoveContainer" containerID="a6dcf5bdabf0bd21feaeaf885163339654890598b7aa0d97ba2ac94be3d34d7e" Jan 23 09:47:48 crc kubenswrapper[4982]: I0123 09:47:48.138689 4982 scope.go:117] "RemoveContainer" containerID="b1ecdafc6682595a2fdeb08cdf02965e107d4cd97035f2bf2478a4e9800bd027" Jan 23 09:47:48 crc kubenswrapper[4982]: I0123 09:47:48.160244 4982 scope.go:117] "RemoveContainer" containerID="b84e9df49d998bde7cf54db0910bf03913d913e0dc954f505f10e9ed8d34663e" Jan 23 09:47:49 crc kubenswrapper[4982]: I0123 09:47:49.047859 4982 generic.go:334] "Generic (PLEG): container finished" podID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerID="e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746" exitCode=0 Jan 23 09:47:49 crc kubenswrapper[4982]: I0123 09:47:49.047908 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerDied","Data":"e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746"} Jan 23 09:47:49 crc kubenswrapper[4982]: I0123 09:47:49.050426 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:47:50 crc kubenswrapper[4982]: I0123 09:47:50.074053 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerStarted","Data":"b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933"} Jan 23 09:47:54 crc kubenswrapper[4982]: I0123 09:47:54.167924 4982 generic.go:334] "Generic (PLEG): container finished" podID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerID="b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933" exitCode=0 Jan 23 09:47:54 crc kubenswrapper[4982]: 
I0123 09:47:54.168089 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerDied","Data":"b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933"} Jan 23 09:47:55 crc kubenswrapper[4982]: I0123 09:47:55.182651 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerStarted","Data":"d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07"} Jan 23 09:47:55 crc kubenswrapper[4982]: I0123 09:47:55.218147 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wb7zx" podStartSLOduration=2.679765477 podStartE2EDuration="8.21812067s" podCreationTimestamp="2026-01-23 09:47:47 +0000 UTC" firstStartedPulling="2026-01-23 09:47:49.050211733 +0000 UTC m=+6743.530575279" lastFinishedPulling="2026-01-23 09:47:54.588567086 +0000 UTC m=+6749.068930472" observedRunningTime="2026-01-23 09:47:55.20637182 +0000 UTC m=+6749.686735206" watchObservedRunningTime="2026-01-23 09:47:55.21812067 +0000 UTC m=+6749.698484056" Jan 23 09:47:57 crc kubenswrapper[4982]: I0123 09:47:57.451423 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:57 crc kubenswrapper[4982]: I0123 09:47:57.451986 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:47:58 crc kubenswrapper[4982]: I0123 09:47:58.503447 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wb7zx" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="registry-server" probeResult="failure" output=< Jan 23 09:47:58 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 09:47:58 crc kubenswrapper[4982]: > Jan 23 09:48:07 crc kubenswrapper[4982]: I0123 09:48:07.498337 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:48:07 crc kubenswrapper[4982]: I0123 09:48:07.565033 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:48:07 crc kubenswrapper[4982]: I0123 09:48:07.734958 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wb7zx"] Jan 23 09:48:09 crc kubenswrapper[4982]: I0123 09:48:09.369879 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wb7zx" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="registry-server" containerID="cri-o://d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07" gracePeriod=2 Jan 23 09:48:09 crc kubenswrapper[4982]: I0123 09:48:09.906794 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.031080 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvj9w\" (UniqueName: \"kubernetes.io/projected/7bb31f87-7985-427b-8190-63e4bc39ba85-kube-api-access-wvj9w\") pod \"7bb31f87-7985-427b-8190-63e4bc39ba85\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.031172 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-catalog-content\") pod \"7bb31f87-7985-427b-8190-63e4bc39ba85\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.031310 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-utilities\") pod \"7bb31f87-7985-427b-8190-63e4bc39ba85\" (UID: \"7bb31f87-7985-427b-8190-63e4bc39ba85\") " Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.032108 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-utilities" (OuterVolumeSpecName: "utilities") pod "7bb31f87-7985-427b-8190-63e4bc39ba85" (UID: "7bb31f87-7985-427b-8190-63e4bc39ba85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.040920 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb31f87-7985-427b-8190-63e4bc39ba85-kube-api-access-wvj9w" (OuterVolumeSpecName: "kube-api-access-wvj9w") pod "7bb31f87-7985-427b-8190-63e4bc39ba85" (UID: "7bb31f87-7985-427b-8190-63e4bc39ba85"). InnerVolumeSpecName "kube-api-access-wvj9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.134852 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvj9w\" (UniqueName: \"kubernetes.io/projected/7bb31f87-7985-427b-8190-63e4bc39ba85-kube-api-access-wvj9w\") on node \"crc\" DevicePath \"\"" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.134912 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.170347 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bb31f87-7985-427b-8190-63e4bc39ba85" (UID: "7bb31f87-7985-427b-8190-63e4bc39ba85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.237092 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bb31f87-7985-427b-8190-63e4bc39ba85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.391905 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerDied","Data":"d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07"} Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.391943 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wb7zx" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.391978 4982 scope.go:117] "RemoveContainer" containerID="d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.392146 4982 generic.go:334] "Generic (PLEG): container finished" podID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerID="d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07" exitCode=0 Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.392172 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wb7zx" event={"ID":"7bb31f87-7985-427b-8190-63e4bc39ba85","Type":"ContainerDied","Data":"7bfc7e295654bd9f3b9c86a9352cb07b7b9e6f464ef77c8446220dffa479f0e3"} Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.429349 4982 scope.go:117] "RemoveContainer" containerID="b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.438580 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wb7zx"] Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.448512 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wb7zx"] Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.457248 4982 scope.go:117] "RemoveContainer" containerID="e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.509182 4982 scope.go:117] "RemoveContainer" containerID="d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07" Jan 23 09:48:10 crc kubenswrapper[4982]: E0123 09:48:10.510260 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07\": container with ID starting with d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07 not found: ID does not exist" containerID="d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.510339 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07"} err="failed to get container status \"d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07\": rpc error: code = NotFound desc = could not find container \"d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07\": container with ID starting with d919b9119b31c663a4455d2e1f0619227e8420fd98773295bd7dfd31a03a9f07 not found: ID does not exist" Jan 23 09:48:10 crc 
kubenswrapper[4982]: I0123 09:48:10.510401 4982 scope.go:117] "RemoveContainer" containerID="b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933" Jan 23 09:48:10 crc kubenswrapper[4982]: E0123 09:48:10.510907 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933\": container with ID starting with b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933 not found: ID does not exist" containerID="b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.510941 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933"} err="failed to get container status \"b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933\": rpc error: code = NotFound desc = could not find container \"b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933\": container with ID starting with b1f44eb4b0d78d04569511bee0129ed6d399881ed691550c88fa07c37c271933 not found: ID does not exist" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.510965 4982 scope.go:117] "RemoveContainer" containerID="e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746" Jan 23 09:48:10 crc kubenswrapper[4982]: E0123 09:48:10.511293 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746\": container with ID starting with e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746 not found: ID does not exist" containerID="e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746" Jan 23 09:48:10 crc kubenswrapper[4982]: I0123 09:48:10.511326 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746"} err="failed to get container status \"e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746\": rpc error: code = NotFound desc = could not find container \"e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746\": container with ID starting with e6c48486655066838b9c37712fe878c841c62e254d9c7546b103f3cf3284f746 not found: ID does not exist" Jan 23 09:48:11 crc kubenswrapper[4982]: I0123 09:48:11.853384 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" path="/var/lib/kubelet/pods/7bb31f87-7985-427b-8190-63e4bc39ba85/volumes" Jan 23 09:49:31 crc kubenswrapper[4982]: I0123 09:49:31.792549 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-949fbbb54-chlc7" podUID="7e007917-bb3f-4c30-a0a6-842f1a1cbb34" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 23 09:50:11 crc kubenswrapper[4982]: I0123 09:50:11.435833 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:50:11 crc kubenswrapper[4982]: I0123 09:50:11.436453 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:50:41 crc kubenswrapper[4982]: I0123 09:50:41.435762 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:50:41 crc kubenswrapper[4982]: I0123 09:50:41.436573 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:50:48 crc kubenswrapper[4982]: I0123 09:50:48.355093 4982 scope.go:117] "RemoveContainer" containerID="8be3801089a0054ee5584da7dde8ac900b4bcc44c60ce7b3f2b94608fd6f1ca6" Jan 23 09:50:48 crc kubenswrapper[4982]: I0123 09:50:48.400163 4982 scope.go:117] "RemoveContainer" containerID="da09384ad1ca1ec3bfbaa53fe1befaef1b13d77605c45ee9b609b6f47154c06a" Jan 23 09:50:48 crc kubenswrapper[4982]: I0123 09:50:48.427511 4982 scope.go:117] "RemoveContainer" containerID="8edf98ae4e4a6c484cd3a1655862fca76d88f9bb4ea45fc9366b09c62aef0639" Jan 23 09:50:48 crc kubenswrapper[4982]: I0123 09:50:48.453578 4982 scope.go:117] "RemoveContainer" containerID="05e51b73f4ede1dffa500218a4d8c855dc9881b7b9c3d50413d6f2c1caef11d1" Jan 23 09:51:09 crc kubenswrapper[4982]: I0123 09:51:09.066391 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-48xj2"] Jan 23 09:51:09 crc kubenswrapper[4982]: I0123 09:51:09.080387 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-48xj2"] Jan 23 09:51:09 crc kubenswrapper[4982]: I0123 09:51:09.841841 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c5b764-69e5-48e3-95fe-6a3e87500b01" path="/var/lib/kubelet/pods/42c5b764-69e5-48e3-95fe-6a3e87500b01/volumes" Jan 23 09:51:10 crc kubenswrapper[4982]: I0123 09:51:10.033781 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-8ece-account-create-update-rczvd"] Jan 23 09:51:10 crc kubenswrapper[4982]: I0123 09:51:10.045894 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-8ece-account-create-update-rczvd"] Jan 23 09:51:11 crc kubenswrapper[4982]: I0123 09:51:11.435566 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:51:11 crc kubenswrapper[4982]: I0123 09:51:11.435871 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:51:11 crc kubenswrapper[4982]: I0123 09:51:11.435915 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:51:11 crc 
kubenswrapper[4982]: I0123 09:51:11.436730 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"125784f8292abf2b5b21751e74054cc18284f00b2963365788c2199c0e04e59f"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:51:11 crc kubenswrapper[4982]: I0123 09:51:11.436780 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://125784f8292abf2b5b21751e74054cc18284f00b2963365788c2199c0e04e59f" gracePeriod=600 Jan 23 09:51:11 crc kubenswrapper[4982]: I0123 09:51:11.850550 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc82f838-20c6-440b-9526-cefa8af7a820" path="/var/lib/kubelet/pods/dc82f838-20c6-440b-9526-cefa8af7a820/volumes" Jan 23 09:51:12 crc kubenswrapper[4982]: I0123 09:51:12.476987 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="125784f8292abf2b5b21751e74054cc18284f00b2963365788c2199c0e04e59f" exitCode=0 Jan 23 09:51:12 crc kubenswrapper[4982]: I0123 09:51:12.477053 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"125784f8292abf2b5b21751e74054cc18284f00b2963365788c2199c0e04e59f"} Jan 23 09:51:12 crc kubenswrapper[4982]: I0123 09:51:12.477406 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"} Jan 23 09:51:12 crc kubenswrapper[4982]: I0123 09:51:12.477439 4982 scope.go:117] "RemoveContainer" containerID="f50d7a2d39c3a2fd07de3d2117ae9e70decf76b0697db4a10d46aed46557dd29" Jan 23 09:51:25 crc kubenswrapper[4982]: I0123 09:51:25.032936 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-xxwv2"] Jan 23 09:51:25 crc kubenswrapper[4982]: I0123 09:51:25.043738 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-xxwv2"] Jan 23 09:51:25 crc kubenswrapper[4982]: I0123 09:51:25.843414 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc085d0e-3b5f-40ac-b0fd-a5da6c671173" path="/var/lib/kubelet/pods/cc085d0e-3b5f-40ac-b0fd-a5da6c671173/volumes" Jan 23 09:51:29 crc kubenswrapper[4982]: I0123 09:51:28.999798 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-nqp77" podUID="05772d57-547c-4312-9662-be516b1eff34" containerName="registry-server" probeResult="failure" output=< Jan 23 09:51:29 crc kubenswrapper[4982]: timeout: health rpc did not complete within 1s Jan 23 09:51:29 crc kubenswrapper[4982]: > Jan 23 09:51:29 crc kubenswrapper[4982]: I0123 09:51:29.300458 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-nqp77" podUID="05772d57-547c-4312-9662-be516b1eff34" containerName="registry-server" probeResult="failure" output=< Jan 23 09:51:29 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 09:51:29 crc kubenswrapper[4982]: > Jan 
23 09:51:48 crc kubenswrapper[4982]: I0123 09:51:48.520541 4982 scope.go:117] "RemoveContainer" containerID="629922a47912269303bf2d86e91efd3b3de349600ffd03f52a2a9e7ea49c42ba" Jan 23 09:51:48 crc kubenswrapper[4982]: I0123 09:51:48.543566 4982 scope.go:117] "RemoveContainer" containerID="40623f9098be15db43ad4b033f6f4c3cde1bbdf13d7159bbe8d986172b18a330" Jan 23 09:51:48 crc kubenswrapper[4982]: I0123 09:51:48.613448 4982 scope.go:117] "RemoveContainer" containerID="85b06b5030db05e3c9e2e029f4b6e88c32cca69cf48df394d9eda4b87aaa24f0" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.053080 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-db4wc"] Jan 23 09:52:13 crc kubenswrapper[4982]: E0123 09:52:13.054775 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="extract-content" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.054793 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="extract-content" Jan 23 09:52:13 crc kubenswrapper[4982]: E0123 09:52:13.054841 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="extract-utilities" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.054849 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="extract-utilities" Jan 23 09:52:13 crc kubenswrapper[4982]: E0123 09:52:13.054858 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="registry-server" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.054865 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="registry-server" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.055377 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bb31f87-7985-427b-8190-63e4bc39ba85" containerName="registry-server" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.058871 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.070075 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4wc"] Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.125370 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bbtp\" (UniqueName: \"kubernetes.io/projected/af28be07-684e-496b-a1c0-8399a6397586-kube-api-access-5bbtp\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.125547 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-catalog-content\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.125639 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-utilities\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.228079 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-catalog-content\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.228194 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-utilities\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.228288 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bbtp\" (UniqueName: \"kubernetes.io/projected/af28be07-684e-496b-a1c0-8399a6397586-kube-api-access-5bbtp\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.228559 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-catalog-content\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.229126 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-utilities\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.247413 4982 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5bbtp\" (UniqueName: \"kubernetes.io/projected/af28be07-684e-496b-a1c0-8399a6397586-kube-api-access-5bbtp\") pod \"redhat-marketplace-db4wc\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.393253 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:13 crc kubenswrapper[4982]: I0123 09:52:13.925460 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4wc"] Jan 23 09:52:14 crc kubenswrapper[4982]: I0123 09:52:14.184717 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerStarted","Data":"4116066cfb4f2814db0fa57887692b970eb51681c7c909694d12d04a6298be12"} Jan 23 09:52:14 crc kubenswrapper[4982]: I0123 09:52:14.185315 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerStarted","Data":"246d1272ba272ddffc61e3966cba53eb6f8baba9f589cdb58c947a25aeb194fb"} Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.199117 4982 generic.go:334] "Generic (PLEG): container finished" podID="af28be07-684e-496b-a1c0-8399a6397586" containerID="4116066cfb4f2814db0fa57887692b970eb51681c7c909694d12d04a6298be12" exitCode=0 Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.199184 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerDied","Data":"4116066cfb4f2814db0fa57887692b970eb51681c7c909694d12d04a6298be12"} Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.614360 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cpmc2"] Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.617398 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.628425 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpmc2"] Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.686563 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-utilities\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.686761 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrlx\" (UniqueName: \"kubernetes.io/projected/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-kube-api-access-jbrlx\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.686951 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-catalog-content\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.789503 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-catalog-content\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.790022 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-catalog-content\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.790181 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-utilities\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.790237 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrlx\" (UniqueName: \"kubernetes.io/projected/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-kube-api-access-jbrlx\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.790743 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-utilities\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.824797 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jbrlx\" (UniqueName: \"kubernetes.io/projected/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-kube-api-access-jbrlx\") pod \"community-operators-cpmc2\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:15 crc kubenswrapper[4982]: I0123 09:52:15.986676 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:16 crc kubenswrapper[4982]: I0123 09:52:16.218735 4982 generic.go:334] "Generic (PLEG): container finished" podID="af28be07-684e-496b-a1c0-8399a6397586" containerID="8084e4f1756a0d69a9d8b4fb34535db44137cbf65124656a90263173e66839ae" exitCode=0 Jan 23 09:52:16 crc kubenswrapper[4982]: I0123 09:52:16.218805 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerDied","Data":"8084e4f1756a0d69a9d8b4fb34535db44137cbf65124656a90263173e66839ae"} Jan 23 09:52:16 crc kubenswrapper[4982]: I0123 09:52:16.564771 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cpmc2"] Jan 23 09:52:17 crc kubenswrapper[4982]: I0123 09:52:17.229957 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerStarted","Data":"c9d071a9ce7005d9c9625e2d9784c5bb399f532f4f19f7399852e9cfab67416d"} Jan 23 09:52:17 crc kubenswrapper[4982]: I0123 09:52:17.232417 4982 generic.go:334] "Generic (PLEG): container finished" podID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerID="dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e" exitCode=0 Jan 23 09:52:17 crc kubenswrapper[4982]: I0123 09:52:17.232460 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerDied","Data":"dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e"} Jan 23 09:52:17 crc kubenswrapper[4982]: I0123 09:52:17.232484 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerStarted","Data":"83ce7a84cd767484f80ffbb557ee98e76dbc52eb3f8a0c3d8ae26fc5495c0655"} Jan 23 09:52:17 crc kubenswrapper[4982]: I0123 09:52:17.258927 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-db4wc" podStartSLOduration=2.7290240900000002 podStartE2EDuration="5.258904265s" podCreationTimestamp="2026-01-23 09:52:12 +0000 UTC" firstStartedPulling="2026-01-23 09:52:14.186871888 +0000 UTC m=+7008.667235274" lastFinishedPulling="2026-01-23 09:52:16.716752063 +0000 UTC m=+7011.197115449" observedRunningTime="2026-01-23 09:52:17.249331597 +0000 UTC m=+7011.729695003" watchObservedRunningTime="2026-01-23 09:52:17.258904265 +0000 UTC m=+7011.739267661" Jan 23 09:52:18 crc kubenswrapper[4982]: I0123 09:52:18.250205 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerStarted","Data":"e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa"} Jan 23 09:52:20 crc kubenswrapper[4982]: I0123 09:52:20.278335 4982 generic.go:334] "Generic (PLEG): container 
finished" podID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerID="e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa" exitCode=0 Jan 23 09:52:20 crc kubenswrapper[4982]: I0123 09:52:20.278715 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerDied","Data":"e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa"} Jan 23 09:52:21 crc kubenswrapper[4982]: I0123 09:52:21.293658 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerStarted","Data":"c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574"} Jan 23 09:52:21 crc kubenswrapper[4982]: I0123 09:52:21.331604 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cpmc2" podStartSLOduration=2.8371293 podStartE2EDuration="6.331584557s" podCreationTimestamp="2026-01-23 09:52:15 +0000 UTC" firstStartedPulling="2026-01-23 09:52:17.233655514 +0000 UTC m=+7011.714018900" lastFinishedPulling="2026-01-23 09:52:20.728110771 +0000 UTC m=+7015.208474157" observedRunningTime="2026-01-23 09:52:21.320521289 +0000 UTC m=+7015.800884685" watchObservedRunningTime="2026-01-23 09:52:21.331584557 +0000 UTC m=+7015.811947943" Jan 23 09:52:23 crc kubenswrapper[4982]: I0123 09:52:23.394239 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:23 crc kubenswrapper[4982]: I0123 09:52:23.394888 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:23 crc kubenswrapper[4982]: I0123 09:52:23.446588 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:24 crc kubenswrapper[4982]: I0123 09:52:24.378827 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:25 crc kubenswrapper[4982]: I0123 09:52:25.987769 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:25 crc kubenswrapper[4982]: I0123 09:52:25.988126 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:26 crc kubenswrapper[4982]: I0123 09:52:26.097564 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:26 crc kubenswrapper[4982]: I0123 09:52:26.389246 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:26 crc kubenswrapper[4982]: I0123 09:52:26.806807 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4wc"] Jan 23 09:52:26 crc kubenswrapper[4982]: I0123 09:52:26.807279 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-db4wc" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="registry-server" containerID="cri-o://c9d071a9ce7005d9c9625e2d9784c5bb399f532f4f19f7399852e9cfab67416d" gracePeriod=2 Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.354828 4982 
generic.go:334] "Generic (PLEG): container finished" podID="af28be07-684e-496b-a1c0-8399a6397586" containerID="c9d071a9ce7005d9c9625e2d9784c5bb399f532f4f19f7399852e9cfab67416d" exitCode=0 Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.354880 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerDied","Data":"c9d071a9ce7005d9c9625e2d9784c5bb399f532f4f19f7399852e9cfab67416d"} Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.355237 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-db4wc" event={"ID":"af28be07-684e-496b-a1c0-8399a6397586","Type":"ContainerDied","Data":"246d1272ba272ddffc61e3966cba53eb6f8baba9f589cdb58c947a25aeb194fb"} Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.355284 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246d1272ba272ddffc61e3966cba53eb6f8baba9f589cdb58c947a25aeb194fb" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.387973 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.404171 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpmc2"] Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.490178 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bbtp\" (UniqueName: \"kubernetes.io/projected/af28be07-684e-496b-a1c0-8399a6397586-kube-api-access-5bbtp\") pod \"af28be07-684e-496b-a1c0-8399a6397586\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.490606 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-utilities\") pod \"af28be07-684e-496b-a1c0-8399a6397586\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.490868 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-catalog-content\") pod \"af28be07-684e-496b-a1c0-8399a6397586\" (UID: \"af28be07-684e-496b-a1c0-8399a6397586\") " Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.491791 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-utilities" (OuterVolumeSpecName: "utilities") pod "af28be07-684e-496b-a1c0-8399a6397586" (UID: "af28be07-684e-496b-a1c0-8399a6397586"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.496443 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af28be07-684e-496b-a1c0-8399a6397586-kube-api-access-5bbtp" (OuterVolumeSpecName: "kube-api-access-5bbtp") pod "af28be07-684e-496b-a1c0-8399a6397586" (UID: "af28be07-684e-496b-a1c0-8399a6397586"). InnerVolumeSpecName "kube-api-access-5bbtp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.511717 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af28be07-684e-496b-a1c0-8399a6397586" (UID: "af28be07-684e-496b-a1c0-8399a6397586"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.593102 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.593134 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bbtp\" (UniqueName: \"kubernetes.io/projected/af28be07-684e-496b-a1c0-8399a6397586-kube-api-access-5bbtp\") on node \"crc\" DevicePath \"\"" Jan 23 09:52:27 crc kubenswrapper[4982]: I0123 09:52:27.593145 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af28be07-684e-496b-a1c0-8399a6397586-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:52:28 crc kubenswrapper[4982]: I0123 09:52:28.364984 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-db4wc" Jan 23 09:52:28 crc kubenswrapper[4982]: I0123 09:52:28.365157 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cpmc2" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="registry-server" containerID="cri-o://c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574" gracePeriod=2 Jan 23 09:52:28 crc kubenswrapper[4982]: I0123 09:52:28.395510 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4wc"] Jan 23 09:52:28 crc kubenswrapper[4982]: I0123 09:52:28.409873 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-db4wc"] Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.362747 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.378176 4982 generic.go:334] "Generic (PLEG): container finished" podID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerID="c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574" exitCode=0 Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.378242 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerDied","Data":"c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574"} Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.378324 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cpmc2" event={"ID":"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11","Type":"ContainerDied","Data":"83ce7a84cd767484f80ffbb557ee98e76dbc52eb3f8a0c3d8ae26fc5495c0655"} Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.378355 4982 scope.go:117] "RemoveContainer" containerID="c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.378269 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cpmc2" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.403522 4982 scope.go:117] "RemoveContainer" containerID="e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.454150 4982 scope.go:117] "RemoveContainer" containerID="dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.528342 4982 scope.go:117] "RemoveContainer" containerID="c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574" Jan 23 09:52:29 crc kubenswrapper[4982]: E0123 09:52:29.528921 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574\": container with ID starting with c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574 not found: ID does not exist" containerID="c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.528960 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574"} err="failed to get container status \"c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574\": rpc error: code = NotFound desc = could not find container \"c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574\": container with ID starting with c769f3ba2bea0e31f38bc3bf0fe66e015b7317897b9889db67aa94ae41c7f574 not found: ID does not exist" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.528982 4982 scope.go:117] "RemoveContainer" containerID="e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa" Jan 23 09:52:29 crc kubenswrapper[4982]: E0123 09:52:29.529360 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa\": container with ID starting with e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa not found: ID does not exist" 
containerID="e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.529417 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa"} err="failed to get container status \"e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa\": rpc error: code = NotFound desc = could not find container \"e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa\": container with ID starting with e48ca039a5abd1f473918247b8f42bed9295e918ff53288254a84dacf74da4aa not found: ID does not exist" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.529454 4982 scope.go:117] "RemoveContainer" containerID="dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e" Jan 23 09:52:29 crc kubenswrapper[4982]: E0123 09:52:29.529812 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e\": container with ID starting with dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e not found: ID does not exist" containerID="dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.529837 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e"} err="failed to get container status \"dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e\": rpc error: code = NotFound desc = could not find container \"dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e\": container with ID starting with dea04c70a36d055023715ad80a48bd7b360dd0925e5f1dcd54736a4ebaa7b29e not found: ID does not exist" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.534155 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-utilities\") pod \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.534332 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-catalog-content\") pod \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.534812 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrlx\" (UniqueName: \"kubernetes.io/projected/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-kube-api-access-jbrlx\") pod \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\" (UID: \"6066ca82-cbbf-4b29-9ce3-cbb6b4739c11\") " Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.534971 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-utilities" (OuterVolumeSpecName: "utilities") pod "6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" (UID: "6066ca82-cbbf-4b29-9ce3-cbb6b4739c11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.535752 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.540715 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-kube-api-access-jbrlx" (OuterVolumeSpecName: "kube-api-access-jbrlx") pod "6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" (UID: "6066ca82-cbbf-4b29-9ce3-cbb6b4739c11"). InnerVolumeSpecName "kube-api-access-jbrlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.583118 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" (UID: "6066ca82-cbbf-4b29-9ce3-cbb6b4739c11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.639649 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrlx\" (UniqueName: \"kubernetes.io/projected/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-kube-api-access-jbrlx\") on node \"crc\" DevicePath \"\"" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.639695 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.726954 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cpmc2"] Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.733714 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cpmc2"] Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.842402 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" path="/var/lib/kubelet/pods/6066ca82-cbbf-4b29-9ce3-cbb6b4739c11/volumes" Jan 23 09:52:29 crc kubenswrapper[4982]: I0123 09:52:29.843209 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af28be07-684e-496b-a1c0-8399a6397586" path="/var/lib/kubelet/pods/af28be07-684e-496b-a1c0-8399a6397586/volumes" Jan 23 09:53:11 crc kubenswrapper[4982]: I0123 09:53:11.435639 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:53:11 crc kubenswrapper[4982]: I0123 09:53:11.436237 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:53:34 crc kubenswrapper[4982]: I0123 09:53:34.761439 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 09:53:34 crc kubenswrapper[4982]: I0123 09:53:34.761963 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 09:53:36 crc kubenswrapper[4982]: I0123 09:53:36.573979 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:53:39 crc kubenswrapper[4982]: I0123 09:53:39.760602 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 09:53:41 crc kubenswrapper[4982]: I0123 09:53:41.435523 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:53:41 crc kubenswrapper[4982]: I0123 09:53:41.435576 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:53:44 crc kubenswrapper[4982]: I0123 09:53:44.759655 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 09:53:44 crc kubenswrapper[4982]: I0123 09:53:44.760688 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 09:53:44 crc kubenswrapper[4982]: I0123 09:53:44.762113 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"8582593d320eb1b6aefbbb745ffb294efe9145049bb2ac22b666a4f7f375008a"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 23 09:53:44 crc kubenswrapper[4982]: I0123 09:53:44.762279 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-central-agent" containerID="cri-o://8582593d320eb1b6aefbbb745ffb294efe9145049bb2ac22b666a4f7f375008a" gracePeriod=30 Jan 23 09:53:49 crc kubenswrapper[4982]: I0123 09:53:49.898590 4982 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-lc9z5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:53:49 crc kubenswrapper[4982]: I0123 09:53:49.899377 
4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lc9z5" podUID="352ea8f0-9969-4a48-9b37-c77819f1473a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:53:53 crc kubenswrapper[4982]: I0123 09:53:53.058572 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-f8fsb"] Jan 23 09:53:53 crc kubenswrapper[4982]: I0123 09:53:53.074560 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-f8fsb"] Jan 23 09:53:53 crc kubenswrapper[4982]: I0123 09:53:53.848704 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825b7887-ce31-452f-aaf6-450627701e7f" path="/var/lib/kubelet/pods/825b7887-ce31-452f-aaf6-450627701e7f/volumes" Jan 23 09:53:54 crc kubenswrapper[4982]: I0123 09:53:54.034031 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-c016-account-create-update-c97cm"] Jan 23 09:53:54 crc kubenswrapper[4982]: I0123 09:53:54.048130 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-c016-account-create-update-c97cm"] Jan 23 09:53:55 crc kubenswrapper[4982]: I0123 09:53:55.876433 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355d991f-28e4-4c77-9e2c-98edf4142b9b" path="/var/lib/kubelet/pods/355d991f-28e4-4c77-9e2c-98edf4142b9b/volumes" Jan 23 09:53:58 crc kubenswrapper[4982]: I0123 09:53:58.532900 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:02 crc kubenswrapper[4982]: I0123 09:54:02.309660 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:02 crc kubenswrapper[4982]: I0123 09:54:02.312930 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:03 crc kubenswrapper[4982]: I0123 09:54:03.758847 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 09:54:05 crc kubenswrapper[4982]: I0123 09:54:05.833791 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="90a4acf4-9689-4375-b94f-fad12c76e1d8" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.172:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:07 crc kubenswrapper[4982]: I0123 09:54:07.310019 4982 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:07 crc kubenswrapper[4982]: I0123 09:54:07.310064 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:07 crc kubenswrapper[4982]: I0123 09:54:07.758508 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1e221cae-f304-4f00-be53-5a72edbf3bd7" containerName="galera" probeResult="failure" output="command timed out" Jan 23 09:54:07 crc kubenswrapper[4982]: I0123 09:54:07.760809 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1e221cae-f304-4f00-be53-5a72edbf3bd7" containerName="galera" probeResult="failure" output="command timed out" Jan 23 09:54:08 crc kubenswrapper[4982]: I0123 09:54:08.530934 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:08 crc kubenswrapper[4982]: I0123 09:54:08.531000 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:08 crc kubenswrapper[4982]: I0123 09:54:08.674886 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podUID="5656681c-2a30-47fa-bb61-f33660c36db0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:08 crc kubenswrapper[4982]: I0123 09:54:08.674903 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podUID="5656681c-2a30-47fa-bb61-f33660c36db0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:11 crc kubenswrapper[4982]: I0123 09:54:11.436296 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:54:11 crc kubenswrapper[4982]: I0123 09:54:11.436770 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:54:12 crc kubenswrapper[4982]: I0123 09:54:12.310080 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:12 crc kubenswrapper[4982]: I0123 09:54:12.310135 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:15 crc kubenswrapper[4982]: I0123 09:54:15.366760 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:15 crc kubenswrapper[4982]: I0123 09:54:15.367463 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:15 crc kubenswrapper[4982]: I0123 09:54:15.834346 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="90a4acf4-9689-4375-b94f-fad12c76e1d8" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.172:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:15 crc kubenswrapper[4982]: I0123 09:54:15.834506 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="90a4acf4-9689-4375-b94f-fad12c76e1d8" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.172:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:16 crc kubenswrapper[4982]: I0123 09:54:16.034550 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:16 crc kubenswrapper[4982]: I0123 09:54:16.034605 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 23 09:54:16 crc kubenswrapper[4982]: I0123 09:54:16.615789 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:16 crc kubenswrapper[4982]: I0123 09:54:16.615789 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:17 crc kubenswrapper[4982]: I0123 09:54:17.310222 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:17 crc kubenswrapper[4982]: I0123 09:54:17.311333 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:18 crc kubenswrapper[4982]: I0123 09:54:18.530950 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:18 crc kubenswrapper[4982]: I0123 09:54:18.631864 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podUID="5656681c-2a30-47fa-bb61-f33660c36db0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:19 crc kubenswrapper[4982]: I0123 09:54:19.407912 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" podUID="e57a3249-8bfe-49da-88fa-dbc887422c3c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:21 crc kubenswrapper[4982]: I0123 09:54:21.085972 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" podUID="3d5f4f8b-84b1-45e2-b393-254c24c090a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:21 crc kubenswrapper[4982]: I0123 09:54:21.085977 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" podUID="3d5f4f8b-84b1-45e2-b393-254c24c090a8" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:21 crc kubenswrapper[4982]: I0123 09:54:21.573859 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:21 crc kubenswrapper[4982]: E0123 09:54:21.583555 4982 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:22 crc kubenswrapper[4982]: I0123 09:54:22.310074 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:22 crc kubenswrapper[4982]: I0123 09:54:22.310240 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:22 crc kubenswrapper[4982]: I0123 09:54:22.858838 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:22 crc kubenswrapper[4982]: I0123 09:54:22.858887 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:22 crc kubenswrapper[4982]: I0123 09:54:22.859133 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:22 crc kubenswrapper[4982]: I0123 09:54:22.859155 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:23 crc kubenswrapper[4982]: I0123 09:54:23.294357 4982 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-8xw2q container/registry 
namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.79:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:23 crc kubenswrapper[4982]: I0123 09:54:23.294675 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-8xw2q" podUID="1b32be9a-afba-4a2c-b758-09f026b04f4d" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.79:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:25 crc kubenswrapper[4982]: I0123 09:54:25.368208 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:25 crc kubenswrapper[4982]: I0123 09:54:25.368275 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:25 crc kubenswrapper[4982]: I0123 09:54:25.833846 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="90a4acf4-9689-4375-b94f-fad12c76e1d8" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.172:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:25 crc kubenswrapper[4982]: I0123 09:54:25.833920 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="90a4acf4-9689-4375-b94f-fad12c76e1d8" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.172:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:26 crc kubenswrapper[4982]: I0123 09:54:26.035594 4982 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:26 crc kubenswrapper[4982]: I0123 09:54:26.035775 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:26 crc kubenswrapper[4982]: I0123 09:54:26.613850 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 23 09:54:26 crc kubenswrapper[4982]: I0123 09:54:26.613866 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:26 crc kubenswrapper[4982]: I0123 09:54:26.845901 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" podUID="b5cf4a40-effb-4d2e-8d9b-a7818151f189" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:26 crc kubenswrapper[4982]: I0123 09:54:26.845985 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" podUID="b5cf4a40-effb-4d2e-8d9b-a7818151f189" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:27 crc kubenswrapper[4982]: I0123 09:54:27.310489 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:27 crc kubenswrapper[4982]: I0123 09:54:27.310516 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.571862 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.571927 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" podUID="1be00a16-06c6-4fe5-9626-149c854680f8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.755815 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" podUID="5285fcd9-54a1-4499-8930-af660f95709a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.755868 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" 
podUID="5656681c-2a30-47fa-bb61-f33660c36db0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.755823 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" podUID="5656681c-2a30-47fa-bb61-f33660c36db0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.756222 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" podUID="5285fcd9-54a1-4499-8930-af660f95709a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.886096 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" podUID="ae1bdd5c-8116-47e9-9344-f1d84794541c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:28 crc kubenswrapper[4982]: I0123 09:54:28.886394 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" podUID="ae1bdd5c-8116-47e9-9344-f1d84794541c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:29 crc kubenswrapper[4982]: I0123 09:54:29.091914 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" podUID="cb115f06-d81c-4b67-af3f-454ffd70d358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:29 crc kubenswrapper[4982]: I0123 09:54:29.091923 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" podUID="cb115f06-d81c-4b67-af3f-454ffd70d358" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:30 crc kubenswrapper[4982]: I0123 09:54:30.461861 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" podUID="801f503b-36aa-441d-9487-8fedc3365d46" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:31 crc kubenswrapper[4982]: I0123 09:54:31.044893 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" podUID="3d5f4f8b-84b1-45e2-b393-254c24c090a8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 
23 09:54:31 crc kubenswrapper[4982]: I0123 09:54:31.572899 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" podUID="65f6cb92-3c6d-4e3c-b7af-734426f97f89" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.56:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:32 crc kubenswrapper[4982]: I0123 09:54:32.309839 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:32 crc kubenswrapper[4982]: I0123 09:54:32.859560 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:32 crc kubenswrapper[4982]: I0123 09:54:32.860036 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:32 crc kubenswrapper[4982]: I0123 09:54:32.900809 4982 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 09:54:32 crc kubenswrapper[4982]: I0123 09:54:32.900896 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:32 crc kubenswrapper[4982]: I0123 09:54:32.901328 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-bnwv8" podUID="3b6912da-3a32-4b32-89bf-ef8dc1a3fff7" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.43:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.282756 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1e221cae-f304-4f00-be53-5a72edbf3bd7" containerName="galera" probeResult="failure" output="command timed out" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.282880 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1e221cae-f304-4f00-be53-5a72edbf3bd7" containerName="galera" probeResult="failure" output="command timed out" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.286449 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-86cb77c54b-2m4pf" 
podUID="082e5308-ee9a-4c17-a4a6-637a6e17b684" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.58:9403/livez\": EOF" Jan 23 09:54:33 crc kubenswrapper[4982]: E0123 09:54:33.377442 4982 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="23.551s" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449404 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449443 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449465 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449486 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449504 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449512 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.449528 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.453651 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.453731 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" gracePeriod=600 Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.471878 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.471972 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.472010 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.475222 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus" containerStatusID={"Type":"cri-o","ID":"541469452be913385126ddf7b3bdb454fe361462b6bc4e097c85494c52ac27d9"} pod="openstack/prometheus-metric-storage-0" containerMessage="Container prometheus failed liveness probe, will be restarted" Jan 23 09:54:33 crc 
kubenswrapper[4982]: I0123 09:54:33.475387 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" containerID="cri-o://541469452be913385126ddf7b3bdb454fe361462b6bc4e097c85494c52ac27d9" gracePeriod=600 Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.589002 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-ss8hx" Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.595394 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-notification-agent" probeResult="failure" output=< Jan 23 09:54:33 crc kubenswrapper[4982]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 23 09:54:33 crc kubenswrapper[4982]: > Jan 23 09:54:33 crc kubenswrapper[4982]: I0123 09:54:33.595489 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:33.999808 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-5r942"] Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.092469 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-5r942"] Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.130745 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.315176 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": dial tcp 10.217.1.166:9090: connect: connection refused" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.885254 4982 generic.go:334] "Generic (PLEG): container finished" podID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerID="541469452be913385126ddf7b3bdb454fe361462b6bc4e097c85494c52ac27d9" exitCode=0 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.886530 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerDied","Data":"541469452be913385126ddf7b3bdb454fe361462b6bc4e097c85494c52ac27d9"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.897094 4982 generic.go:334] "Generic (PLEG): container finished" podID="5656681c-2a30-47fa-bb61-f33660c36db0" containerID="f27e8e6554e0615dffbb2d4b0bd356f8a070cef76d19e49845ff8569aa631720" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.897166 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" event={"ID":"5656681c-2a30-47fa-bb61-f33660c36db0","Type":"ContainerDied","Data":"f27e8e6554e0615dffbb2d4b0bd356f8a070cef76d19e49845ff8569aa631720"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.936968 4982 generic.go:334] "Generic (PLEG): container finished" podID="75a3ce99-813c-45c1-9b30-0bb0b47f26a6" containerID="2de6f1cd4bfc6b7c28f405915dfa31e90d52a2f370939b0050b33e048e45b579" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.937366 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" event={"ID":"75a3ce99-813c-45c1-9b30-0bb0b47f26a6","Type":"ContainerDied","Data":"2de6f1cd4bfc6b7c28f405915dfa31e90d52a2f370939b0050b33e048e45b579"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.964818 4982 generic.go:334] "Generic (PLEG): container finished" podID="b5cf4a40-effb-4d2e-8d9b-a7818151f189" containerID="e103d7ff88ac2055ccfe8a666fcca4f34daf708cbd3a86ecadc8a19e5a64765e" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.964890 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" event={"ID":"b5cf4a40-effb-4d2e-8d9b-a7818151f189","Type":"ContainerDied","Data":"e103d7ff88ac2055ccfe8a666fcca4f34daf708cbd3a86ecadc8a19e5a64765e"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.974601 4982 generic.go:334] "Generic (PLEG): container finished" podID="6f86042f-a06a-4756-96ae-19de64621eb4" containerID="960ef2c3a886e966eaf15d62911feaab916222c6a76c7db33ddb9aefcbafda49" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.974719 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" event={"ID":"6f86042f-a06a-4756-96ae-19de64621eb4","Type":"ContainerDied","Data":"960ef2c3a886e966eaf15d62911feaab916222c6a76c7db33ddb9aefcbafda49"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.994255 4982 generic.go:334] "Generic (PLEG): container finished" podID="3d5f4f8b-84b1-45e2-b393-254c24c090a8" containerID="db96beddac2c035d87f497ee12c41625cd44ebc5c15b9b4a958cd153d6dbd054" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:34.994450 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" event={"ID":"3d5f4f8b-84b1-45e2-b393-254c24c090a8","Type":"ContainerDied","Data":"db96beddac2c035d87f497ee12c41625cd44ebc5c15b9b4a958cd153d6dbd054"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.005729 4982 generic.go:334] "Generic (PLEG): container finished" podID="2502f90a-1eb4-4fca-ae97-f9624f9caa2c" containerID="34b006a929b40c990bb3515900d72d84fcc8c21077438f35d0102a99aafc6525" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.005833 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" event={"ID":"2502f90a-1eb4-4fca-ae97-f9624f9caa2c","Type":"ContainerDied","Data":"34b006a929b40c990bb3515900d72d84fcc8c21077438f35d0102a99aafc6525"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.044317 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" exitCode=0 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.044380 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.044411 4982 scope.go:117] "RemoveContainer" containerID="125784f8292abf2b5b21751e74054cc18284f00b2963365788c2199c0e04e59f" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.046599 4982 scope.go:117] "RemoveContainer" 
containerID="f27e8e6554e0615dffbb2d4b0bd356f8a070cef76d19e49845ff8569aa631720" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.047213 4982 scope.go:117] "RemoveContainer" containerID="e103d7ff88ac2055ccfe8a666fcca4f34daf708cbd3a86ecadc8a19e5a64765e" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.049856 4982 scope.go:117] "RemoveContainer" containerID="2de6f1cd4bfc6b7c28f405915dfa31e90d52a2f370939b0050b33e048e45b579" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.052779 4982 generic.go:334] "Generic (PLEG): container finished" podID="5285fcd9-54a1-4499-8930-af660f95709a" containerID="da3faf8132437e88a816a27305df57306ff28535446c1a3029a9d39650250fa3" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.052862 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" event={"ID":"5285fcd9-54a1-4499-8930-af660f95709a","Type":"ContainerDied","Data":"da3faf8132437e88a816a27305df57306ff28535446c1a3029a9d39650250fa3"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.066171 4982 generic.go:334] "Generic (PLEG): container finished" podID="efce0465-d792-44d6-b0c6-48f3e74ccd7a" containerID="ecf97bf1bac39b955ab576a6e10f77f3a3a665795334c4ca1ab24c4461a9a98f" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.066270 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" event={"ID":"efce0465-d792-44d6-b0c6-48f3e74ccd7a","Type":"ContainerDied","Data":"ecf97bf1bac39b955ab576a6e10f77f3a3a665795334c4ca1ab24c4461a9a98f"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.092214 4982 generic.go:334] "Generic (PLEG): container finished" podID="ae1bdd5c-8116-47e9-9344-f1d84794541c" containerID="a8db99c5ff2a434069da212522ae81671be16c37a83057df980b0a2fafc51e3f" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.092301 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" event={"ID":"ae1bdd5c-8116-47e9-9344-f1d84794541c","Type":"ContainerDied","Data":"a8db99c5ff2a434069da212522ae81671be16c37a83057df980b0a2fafc51e3f"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.098079 4982 scope.go:117] "RemoveContainer" containerID="960ef2c3a886e966eaf15d62911feaab916222c6a76c7db33ddb9aefcbafda49" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.098559 4982 scope.go:117] "RemoveContainer" containerID="34b006a929b40c990bb3515900d72d84fcc8c21077438f35d0102a99aafc6525" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.098827 4982 scope.go:117] "RemoveContainer" containerID="db96beddac2c035d87f497ee12c41625cd44ebc5c15b9b4a958cd153d6dbd054" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.107133 4982 generic.go:334] "Generic (PLEG): container finished" podID="801f503b-36aa-441d-9487-8fedc3365d46" containerID="db81386d2122947b33f879d31c408b79aee6c4fa677223397b0be2da2e761c46" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.107263 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" event={"ID":"801f503b-36aa-441d-9487-8fedc3365d46","Type":"ContainerDied","Data":"db81386d2122947b33f879d31c408b79aee6c4fa677223397b0be2da2e761c46"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.121859 4982 generic.go:334] "Generic (PLEG): container finished" podID="34541257-bf45-41a5-a756-393b32053416" 
containerID="94b99109dafbff61442584be6d7dea03989a28dcf961f0ef8b583b62f6f8ad4a" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.121940 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" event={"ID":"34541257-bf45-41a5-a756-393b32053416","Type":"ContainerDied","Data":"94b99109dafbff61442584be6d7dea03989a28dcf961f0ef8b583b62f6f8ad4a"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.126208 4982 generic.go:334] "Generic (PLEG): container finished" podID="cb115f06-d81c-4b67-af3f-454ffd70d358" containerID="eef3a39fc253ba024fd9776db7db13d69b07e067311447d805974100648e444a" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.126310 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" event={"ID":"cb115f06-d81c-4b67-af3f-454ffd70d358","Type":"ContainerDied","Data":"eef3a39fc253ba024fd9776db7db13d69b07e067311447d805974100648e444a"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.143400 4982 generic.go:334] "Generic (PLEG): container finished" podID="d6169561-3b43-4dcd-8fa6-f08d484bccdf" containerID="c24c8f6f5b2a928939a24c445396c0c24594d0771c6d8714fb5985f743b2bc61" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.143508 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" event={"ID":"d6169561-3b43-4dcd-8fa6-f08d484bccdf","Type":"ContainerDied","Data":"c24c8f6f5b2a928939a24c445396c0c24594d0771c6d8714fb5985f743b2bc61"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.144380 4982 scope.go:117] "RemoveContainer" containerID="c24c8f6f5b2a928939a24c445396c0c24594d0771c6d8714fb5985f743b2bc61" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.158399 4982 generic.go:334] "Generic (PLEG): container finished" podID="16b1155b-baa1-467b-947f-36673d41a1a0" containerID="0288c7c23107b9a1b8c6882c416553bc8c31170049feaa4b52d9c97974df4149" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.158477 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" event={"ID":"16b1155b-baa1-467b-947f-36673d41a1a0","Type":"ContainerDied","Data":"0288c7c23107b9a1b8c6882c416553bc8c31170049feaa4b52d9c97974df4149"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.160521 4982 scope.go:117] "RemoveContainer" containerID="da3faf8132437e88a816a27305df57306ff28535446c1a3029a9d39650250fa3" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.160911 4982 scope.go:117] "RemoveContainer" containerID="ecf97bf1bac39b955ab576a6e10f77f3a3a665795334c4ca1ab24c4461a9a98f" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.161330 4982 generic.go:334] "Generic (PLEG): container finished" podID="9695f150-036e-4e4b-a857-7af97f1576e5" containerID="16645ed897231cac65f1c6458ea400c2e4f37c6cfcbf6ea20c16e3910bd54665" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.161370 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" event={"ID":"9695f150-036e-4e4b-a857-7af97f1576e5","Type":"ContainerDied","Data":"16645ed897231cac65f1c6458ea400c2e4f37c6cfcbf6ea20c16e3910bd54665"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.162681 4982 scope.go:117] "RemoveContainer" 
containerID="a8db99c5ff2a434069da212522ae81671be16c37a83057df980b0a2fafc51e3f" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.169365 4982 generic.go:334] "Generic (PLEG): container finished" podID="a73a3899-9dfc-404d-b7cd-7eaf32de406a" containerID="bd56a65ae1110d9611c5cbc82ee818907c8e2dc8ed167c776b52aa696cb3fd7d" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.169454 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" event={"ID":"a73a3899-9dfc-404d-b7cd-7eaf32de406a","Type":"ContainerDied","Data":"bd56a65ae1110d9611c5cbc82ee818907c8e2dc8ed167c776b52aa696cb3fd7d"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.171807 4982 generic.go:334] "Generic (PLEG): container finished" podID="96c1e2e5-87b0-4341-83cd-5b623de95215" containerID="ab0aad75ebc90beb32455ba6de75406bcb427e96c10e149ce22e9b6f3e6104d7" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.171858 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" event={"ID":"96c1e2e5-87b0-4341-83cd-5b623de95215","Type":"ContainerDied","Data":"ab0aad75ebc90beb32455ba6de75406bcb427e96c10e149ce22e9b6f3e6104d7"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.190785 4982 generic.go:334] "Generic (PLEG): container finished" podID="564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c" containerID="db28aaa642def327de785e1eeadefcc3ddb57db6f55f6315a4de92785cb8783c" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.190934 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" event={"ID":"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c","Type":"ContainerDied","Data":"db28aaa642def327de785e1eeadefcc3ddb57db6f55f6315a4de92785cb8783c"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.194486 4982 generic.go:334] "Generic (PLEG): container finished" podID="0167eb4d-3772-402e-8451-69789e17dd5e" containerID="f0760ba7e571dcdd0924f2284e78fe2aadc6da1c9702d7066985346aa0a2c0b6" exitCode=1 Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.194612 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" event={"ID":"0167eb4d-3772-402e-8451-69789e17dd5e","Type":"ContainerDied","Data":"f0760ba7e571dcdd0924f2284e78fe2aadc6da1c9702d7066985346aa0a2c0b6"} Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.200606 4982 scope.go:117] "RemoveContainer" containerID="db81386d2122947b33f879d31c408b79aee6c4fa677223397b0be2da2e761c46" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.211536 4982 scope.go:117] "RemoveContainer" containerID="94b99109dafbff61442584be6d7dea03989a28dcf961f0ef8b583b62f6f8ad4a" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.212304 4982 scope.go:117] "RemoveContainer" containerID="eef3a39fc253ba024fd9776db7db13d69b07e067311447d805974100648e444a" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.262712 4982 scope.go:117] "RemoveContainer" containerID="16645ed897231cac65f1c6458ea400c2e4f37c6cfcbf6ea20c16e3910bd54665" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.268748 4982 scope.go:117] "RemoveContainer" containerID="0288c7c23107b9a1b8c6882c416553bc8c31170049feaa4b52d9c97974df4149" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.269974 4982 scope.go:117] "RemoveContainer" 
containerID="bd56a65ae1110d9611c5cbc82ee818907c8e2dc8ed167c776b52aa696cb3fd7d" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.316198 4982 scope.go:117] "RemoveContainer" containerID="db28aaa642def327de785e1eeadefcc3ddb57db6f55f6315a4de92785cb8783c" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.317433 4982 scope.go:117] "RemoveContainer" containerID="f0760ba7e571dcdd0924f2284e78fe2aadc6da1c9702d7066985346aa0a2c0b6" Jan 23 09:54:35 crc kubenswrapper[4982]: I0123 09:54:35.341425 4982 scope.go:117] "RemoveContainer" containerID="ab0aad75ebc90beb32455ba6de75406bcb427e96c10e149ce22e9b6f3e6104d7" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:35.776534 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:35.982765 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f4ee61-f493-4560-91a7-229558683ade" path="/var/lib/kubelet/pods/75f4ee61-f493-4560-91a7-229558683ade/volumes" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.242382 4982 generic.go:334] "Generic (PLEG): container finished" podID="91102982-af50-41cf-9969-28c92bdf8f36" containerID="6c656c838f43d3b5711e71c1a34a758f7a7ef7e0dda96f4a8b1dec98fc7fc1c3" exitCode=1 Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.242549 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" event={"ID":"91102982-af50-41cf-9969-28c92bdf8f36","Type":"ContainerDied","Data":"6c656c838f43d3b5711e71c1a34a758f7a7ef7e0dda96f4a8b1dec98fc7fc1c3"} Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.243583 4982 scope.go:117] "RemoveContainer" containerID="6c656c838f43d3b5711e71c1a34a758f7a7ef7e0dda96f4a8b1dec98fc7fc1c3" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.267307 4982 generic.go:334] "Generic (PLEG): container finished" podID="1be00a16-06c6-4fe5-9626-149c854680f8" containerID="b21b69f7dab970c6843c5d9d0bb498c096d274449c558fb9fd5e2e6650d2bd0d" exitCode=1 Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.267389 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" event={"ID":"1be00a16-06c6-4fe5-9626-149c854680f8","Type":"ContainerDied","Data":"b21b69f7dab970c6843c5d9d0bb498c096d274449c558fb9fd5e2e6650d2bd0d"} Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.268100 4982 scope.go:117] "RemoveContainer" containerID="b21b69f7dab970c6843c5d9d0bb498c096d274449c558fb9fd5e2e6650d2bd0d" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.349033 4982 generic.go:334] "Generic (PLEG): container finished" podID="0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1" containerID="773a39599329a6de9ea462c8d0aaa8571a9eae39cd277d4ad6bd3a46cb1dbf16" exitCode=1 Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.349129 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" event={"ID":"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1","Type":"ContainerDied","Data":"773a39599329a6de9ea462c8d0aaa8571a9eae39cd277d4ad6bd3a46cb1dbf16"} Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.349939 4982 scope.go:117] "RemoveContainer" containerID="773a39599329a6de9ea462c8d0aaa8571a9eae39cd277d4ad6bd3a46cb1dbf16" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.402090 4982 generic.go:334] "Generic (PLEG): container finished" 
podID="854d661c-b4ec-4046-85d3-3e643336d93c" containerID="472854d5b241b0ce1bd053b1d9e79f97f15abe6c479daafa6741f442f997320b" exitCode=1 Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.402235 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" event={"ID":"854d661c-b4ec-4046-85d3-3e643336d93c","Type":"ContainerDied","Data":"472854d5b241b0ce1bd053b1d9e79f97f15abe6c479daafa6741f442f997320b"} Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.403449 4982 scope.go:117] "RemoveContainer" containerID="472854d5b241b0ce1bd053b1d9e79f97f15abe6c479daafa6741f442f997320b" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.432001 4982 generic.go:334] "Generic (PLEG): container finished" podID="e57a3249-8bfe-49da-88fa-dbc887422c3c" containerID="8b6c2090d6b4c49aef4b7678cc35a9368dc1fdb97a3e8dadc3f3977a17fbac95" exitCode=1 Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.432095 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" event={"ID":"e57a3249-8bfe-49da-88fa-dbc887422c3c","Type":"ContainerDied","Data":"8b6c2090d6b4c49aef4b7678cc35a9368dc1fdb97a3e8dadc3f3977a17fbac95"} Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.433665 4982 scope.go:117] "RemoveContainer" containerID="8b6c2090d6b4c49aef4b7678cc35a9368dc1fdb97a3e8dadc3f3977a17fbac95" Jan 23 09:54:36 crc kubenswrapper[4982]: E0123 09:54:36.434597 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.455766 4982 generic.go:334] "Generic (PLEG): container finished" podID="a00a88ad-d184-4394-9231-4da810afad02" containerID="0b14f824a8767c34446eb86fafdee61698bc38081e3c7c08246a56a30ee93d2d" exitCode=1 Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.455810 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" event={"ID":"a00a88ad-d184-4394-9231-4da810afad02","Type":"ContainerDied","Data":"0b14f824a8767c34446eb86fafdee61698bc38081e3c7c08246a56a30ee93d2d"} Jan 23 09:54:36 crc kubenswrapper[4982]: I0123 09:54:36.456723 4982 scope.go:117] "RemoveContainer" containerID="0b14f824a8767c34446eb86fafdee61698bc38081e3c7c08246a56a30ee93d2d" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.460274 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.471053 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:54:37 crc kubenswrapper[4982]: E0123 09:54:37.471461 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" 
podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.489001 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.500095 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.531575 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.592189 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.605757 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.708893 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.804106 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 09:54:37 crc kubenswrapper[4982]: I0123 09:54:37.840978 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.009539 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.049340 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.062683 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.088532 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.233550 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.299384 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.365543 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.404668 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.454090 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 09:54:38 crc kubenswrapper[4982]: I0123 09:54:38.492990 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.311418 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": dial tcp 10.217.1.166:9090: connect: connection refused" Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.426661 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.427066 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.500830 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" event={"ID":"6f86042f-a06a-4756-96ae-19de64621eb4","Type":"ContainerStarted","Data":"7ee60b773ff13ced94966e17d8f3a2c5fd11ac259d7f34b5a4aaa9c52cfb257a"} Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.503400 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" event={"ID":"a73a3899-9dfc-404d-b7cd-7eaf32de406a","Type":"ContainerStarted","Data":"81880fcadf072c80a1c66f7a91d37aa21cca3fe525697fa2ec8d97e778b37696"} Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.505444 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" event={"ID":"cb115f06-d81c-4b67-af3f-454ffd70d358","Type":"ContainerStarted","Data":"bc13573a0be3194dc19efc766814c5081a435cc157d31d8647be6cde477fa014"} Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.507064 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" event={"ID":"5656681c-2a30-47fa-bb61-f33660c36db0","Type":"ContainerStarted","Data":"a51d8c1e5246e0d044eb59a75d5c7fde189ea3f6df68deadf8bfeb0755c7feca"} Jan 23 09:54:39 crc kubenswrapper[4982]: I0123 09:54:39.511752 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" event={"ID":"854d661c-b4ec-4046-85d3-3e643336d93c","Type":"ContainerStarted","Data":"969d6a89be5ba4ae547060d95631ee07a489913d43b86df7b99629cecf5b5fcb"} Jan 23 09:54:40 crc kubenswrapper[4982]: I0123 09:54:40.004125 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 09:54:40 crc kubenswrapper[4982]: I0123 09:54:40.004185 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 09:54:40 crc kubenswrapper[4982]: I0123 09:54:40.311810 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 09:54:40 crc kubenswrapper[4982]: I0123 09:54:40.311855 4982 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 09:54:40 crc kubenswrapper[4982]: I0123 09:54:40.959775 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 09:54:44 crc kubenswrapper[4982]: I0123 09:54:44.310575 4982 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="06c55dbd-0f43-4959-9035-a1d07894d0bb" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.166:9090/-/ready\": dial tcp 10.217.1.166:9090: connect: connection refused" Jan 23 09:54:44 crc kubenswrapper[4982]: I0123 09:54:44.313061 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:44 crc kubenswrapper[4982]: I0123 09:54:44.603241 4982 generic.go:334] "Generic (PLEG): container finished" podID="e6373cfd-07f7-414a-b88f-b928c187d201" containerID="8582593d320eb1b6aefbbb745ffb294efe9145049bb2ac22b666a4f7f375008a" exitCode=137 Jan 23 09:54:44 crc kubenswrapper[4982]: I0123 09:54:44.603323 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerDied","Data":"8582593d320eb1b6aefbbb745ffb294efe9145049bb2ac22b666a4f7f375008a"} Jan 23 09:54:45 crc kubenswrapper[4982]: I0123 09:54:45.763931 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.460650 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.489163 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.500412 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.531996 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.591517 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.602887 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.708834 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.803645 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 09:54:47 crc kubenswrapper[4982]: I0123 09:54:47.841524 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.010467 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.048769 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.062798 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.088571 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.234561 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.299830 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.368826 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.404697 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.454918 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.492911 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.624573 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.702277 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" event={"ID":"5285fcd9-54a1-4499-8930-af660f95709a","Type":"ContainerStarted","Data":"4d40075d8325eba74b2992ae5eec56281c722e3fff2ffea273187bdb0b27f14d"} Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.702390 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.709121 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.721698 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.725600 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" event={"ID":"d6169561-3b43-4dcd-8fa6-f08d484bccdf","Type":"ContainerStarted","Data":"c86f6d91fabd33ae970c630e2b58e939a4098841f7e910eb005c56f7570b75c5"} Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.727116 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.740154 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.745864 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.754213 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.771533 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.781163 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" event={"ID":"801f503b-36aa-441d-9487-8fedc3365d46","Type":"ContainerStarted","Data":"4e629f227631ce9da15ab7848036ff7b86e30ab8619dc013b2aca2deca7bda2f"} Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.782571 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.795651 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.801977 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" event={"ID":"ae1bdd5c-8116-47e9-9344-f1d84794541c","Type":"ContainerStarted","Data":"b71eb204c421f5d45b5cfb5fd405f9dc98cc57881ac753e550d6cb18b28adee8"} Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.803770 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.813996 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.820311 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.828287 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:54:48 crc kubenswrapper[4982]: E0123 09:54:48.828658 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.838235 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.849060 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" event={"ID":"0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1","Type":"ContainerStarted","Data":"1ec29a973f726662406e60c5964f5f2ee17af20f8c33f63c9bf9876c76504d6d"} Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.850579 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.856243 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.856293 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.860264 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-5xrcb" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.862513 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-lcpcl" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.911323 4982 scope.go:117] "RemoveContainer" containerID="e80fd41ea97677ad5986663394776db9499cdfc6c98155f55f5e5fd42e3ed4af" Jan 23 09:54:48 crc kubenswrapper[4982]: I0123 09:54:48.974205 4982 scope.go:117] "RemoveContainer" containerID="61352dfbba815769d0ed9888825a809a01ef0ae7e2bacb0d731e94c7d3569c4a" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.068433 4982 scope.go:117] "RemoveContainer" containerID="abd9b2b03dafba2dc3a39a20cc3dfb03197779569c0a7f39432f861b9751cdaa" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.315086 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.868663 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" event={"ID":"75a3ce99-813c-45c1-9b30-0bb0b47f26a6","Type":"ContainerStarted","Data":"07402a19b4fec15eabe7f88bdb974e1244161379801b439f3aaf6f3636a5f3f7"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.869489 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.871328 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" event={"ID":"564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c","Type":"ContainerStarted","Data":"d5c46a4e86a9134c13db5712fe20436e50006880283761bb6dc6ecf8d7f574d5"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.873704 4982 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" event={"ID":"0167eb4d-3772-402e-8451-69789e17dd5e","Type":"ContainerStarted","Data":"29c365f6e93273fb18c8d7d83f93e1aa6623553f71c513ae002e234e9aa0441b"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.873789 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.876526 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" event={"ID":"a00a88ad-d184-4394-9231-4da810afad02","Type":"ContainerStarted","Data":"d33c58ebd54c3231a157b6d81fe3f524ca2484080cfd29c0d549081fffb22f94"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.879952 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" event={"ID":"b5cf4a40-effb-4d2e-8d9b-a7818151f189","Type":"ContainerStarted","Data":"e676351c13f7d1c1e2ffa0aa6b5e7cb6337c1d878386bb328afc7c47d3656406"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.884608 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" event={"ID":"3d5f4f8b-84b1-45e2-b393-254c24c090a8","Type":"ContainerStarted","Data":"0f65ba62b6a2d5f101d2983d089946295d22948c0f1a7c94b70c882be4efea68"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.888547 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" event={"ID":"1be00a16-06c6-4fe5-9626-149c854680f8","Type":"ContainerStarted","Data":"baa5e93c45685de592630f6938cadad413211ca1b52d9b74bd1df89adf5b1bda"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.893727 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"3a791025d7716ff30be0b48e3e37f78c8ab1bf8c2cd2e9f7bc76d8af457c4876"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.894443 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"b4f0237782f8735049b9fb2b6e770bbb1565cfe33e8f143cfdf7d63571b30eb3"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.894523 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e6373cfd-07f7-414a-b88f-b928c187d201" containerName="ceilometer-notification-agent" containerID="cri-o://b4f0237782f8735049b9fb2b6e770bbb1565cfe33e8f143cfdf7d63571b30eb3" gracePeriod=30 Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.899986 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" event={"ID":"91102982-af50-41cf-9969-28c92bdf8f36","Type":"ContainerStarted","Data":"fc3cc31bff2b906af8be052077e8e9c35230b8059b47e9c570cc1a611181cd67"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.901667 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 
09:54:49.908837 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nc6w7" event={"ID":"96c1e2e5-87b0-4341-83cd-5b623de95215","Type":"ContainerStarted","Data":"9878ed4b5f2f814aafc700be4be3e6c8403c34489c83a46757744c2004f5502a"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.919735 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" event={"ID":"34541257-bf45-41a5-a756-393b32053416","Type":"ContainerStarted","Data":"acafc05dffea2f844661b0adcea2f67f9b9f73c944c2ec83c59095ec7b62b1b1"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.925995 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" event={"ID":"9695f150-036e-4e4b-a857-7af97f1576e5","Type":"ContainerStarted","Data":"3fe2cb530c18eef4498eac141b2030bdb3996ae02c91fac0f2e96c67479a52c7"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.932464 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" event={"ID":"2502f90a-1eb4-4fca-ae97-f9624f9caa2c","Type":"ContainerStarted","Data":"1da0469ede3426d4ec278c1e0b2bbcda2317533facefaa1f26ef792282d98fb0"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.932731 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.936486 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"06c55dbd-0f43-4959-9035-a1d07894d0bb","Type":"ContainerStarted","Data":"d880fde5486019ec22d685692434841d430f654eb15e395adc475dbec8a7ce15"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.943961 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" event={"ID":"16b1155b-baa1-467b-947f-36673d41a1a0","Type":"ContainerStarted","Data":"fb84a036c0a2fc9e3af4f34cdb02cef4d69e058a8079baa1365b240b41f70ca4"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.947657 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" event={"ID":"e57a3249-8bfe-49da-88fa-dbc887422c3c","Type":"ContainerStarted","Data":"f4f1040d2c5d2e0dd96b5fab2f1fd869245203e1f169b567cfe880b7bde1ade5"} Jan 23 09:54:49 crc kubenswrapper[4982]: I0123 09:54:49.952202 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" event={"ID":"efce0465-d792-44d6-b0c6-48f3e74ccd7a","Type":"ContainerStarted","Data":"2009b85822a589b8c189055ad5bde3ac38ef5151a4c2f76a5d27530779b805b9"} Jan 23 09:54:54 crc kubenswrapper[4982]: I0123 09:54:54.309411 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:54 crc kubenswrapper[4982]: I0123 09:54:54.315438 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:55 crc kubenswrapper[4982]: I0123 09:54:55.027452 4982 generic.go:334] "Generic (PLEG): container finished" podID="e6373cfd-07f7-414a-b88f-b928c187d201" containerID="b4f0237782f8735049b9fb2b6e770bbb1565cfe33e8f143cfdf7d63571b30eb3" exitCode=0 Jan 23 09:54:55 
crc kubenswrapper[4982]: I0123 09:54:55.027534 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerDied","Data":"b4f0237782f8735049b9fb2b6e770bbb1565cfe33e8f143cfdf7d63571b30eb3"} Jan 23 09:54:55 crc kubenswrapper[4982]: I0123 09:54:55.035416 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 23 09:54:55 crc kubenswrapper[4982]: I0123 09:54:55.765707 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-85bfd44c94-wq88d" Jan 23 09:54:56 crc kubenswrapper[4982]: I0123 09:54:56.050130 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e6373cfd-07f7-414a-b88f-b928c187d201","Type":"ContainerStarted","Data":"590f4e70cad0bdbb2abe7adcd4dd00059fac47cbf8e2898d423287644b353a0c"} Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.462230 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-jhcmh" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.491105 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-8rr6c" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.502691 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-pm6mp" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.539655 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-glq8j" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.592569 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.594167 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-gvwjl" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.605420 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-44mmv" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.709036 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.712318 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-tt4pp" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.811159 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-4nz5s" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.848798 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 09:54:57 crc kubenswrapper[4982]: I0123 09:54:57.848879 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-hfrk2" Jan 23 
09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.077896 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwfjx" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.107889 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-q4hlv" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.241093 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-r2wx4" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.305828 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-mwfhl" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.370875 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-lqf89" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.412166 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-dddts" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.474490 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-p6jkk" Jan 23 09:54:58 crc kubenswrapper[4982]: I0123 09:54:58.501476 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-ptg2h" Jan 23 09:54:59 crc kubenswrapper[4982]: I0123 09:54:59.426953 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-gvxjc" Jan 23 09:55:00 crc kubenswrapper[4982]: I0123 09:55:00.010583 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn" Jan 23 09:55:00 crc kubenswrapper[4982]: I0123 09:55:00.319449 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57c46955cf-9c26t" Jan 23 09:55:03 crc kubenswrapper[4982]: I0123 09:55:03.827977 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:55:03 crc kubenswrapper[4982]: E0123 09:55:03.828888 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:55:18 crc kubenswrapper[4982]: I0123 09:55:18.828166 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:55:18 crc kubenswrapper[4982]: E0123 09:55:18.829360 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:55:20 crc kubenswrapper[4982]: I0123 09:55:20.961493 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6d4d669f8c-c6qg5" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.199718 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d58d5/must-gather-pzp92"] Jan 23 09:55:30 crc kubenswrapper[4982]: E0123 09:55:30.223471 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="extract-utilities" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.223486 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="extract-utilities" Jan 23 09:55:30 crc kubenswrapper[4982]: E0123 09:55:30.223534 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="registry-server" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.223540 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="registry-server" Jan 23 09:55:30 crc kubenswrapper[4982]: E0123 09:55:30.223573 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="registry-server" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.223616 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="registry-server" Jan 23 09:55:30 crc kubenswrapper[4982]: E0123 09:55:30.223655 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="extract-content" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.223662 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="extract-content" Jan 23 09:55:30 crc kubenswrapper[4982]: E0123 09:55:30.223694 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="extract-utilities" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.223700 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="extract-utilities" Jan 23 09:55:30 crc kubenswrapper[4982]: E0123 09:55:30.223715 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="extract-content" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.223721 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="extract-content" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.224007 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6066ca82-cbbf-4b29-9ce3-cbb6b4739c11" containerName="registry-server" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.224031 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="af28be07-684e-496b-a1c0-8399a6397586" containerName="registry-server" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.225304 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.227403 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-d58d5"/"default-dockercfg-8gdt6" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.230517 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d58d5"/"openshift-service-ca.crt" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.230531 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d58d5"/"kube-root-ca.crt" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.265816 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d58d5/must-gather-pzp92"] Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.309060 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aab87043-845d-4863-9f94-3b7d67fc113a-must-gather-output\") pod \"must-gather-pzp92\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.309171 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2cp\" (UniqueName: \"kubernetes.io/projected/aab87043-845d-4863-9f94-3b7d67fc113a-kube-api-access-hr2cp\") pod \"must-gather-pzp92\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.449570 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2cp\" (UniqueName: \"kubernetes.io/projected/aab87043-845d-4863-9f94-3b7d67fc113a-kube-api-access-hr2cp\") pod \"must-gather-pzp92\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.470035 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aab87043-845d-4863-9f94-3b7d67fc113a-must-gather-output\") pod \"must-gather-pzp92\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.472600 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aab87043-845d-4863-9f94-3b7d67fc113a-must-gather-output\") pod \"must-gather-pzp92\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.515088 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2cp\" (UniqueName: \"kubernetes.io/projected/aab87043-845d-4863-9f94-3b7d67fc113a-kube-api-access-hr2cp\") pod \"must-gather-pzp92\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:30 crc kubenswrapper[4982]: I0123 09:55:30.578658 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 09:55:31 crc kubenswrapper[4982]: I0123 09:55:31.321607 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d58d5/must-gather-pzp92"] Jan 23 09:55:31 crc kubenswrapper[4982]: W0123 09:55:31.344235 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaab87043_845d_4863_9f94_3b7d67fc113a.slice/crio-4c5df6aa8b526001aa2ade1a50f356ac86b921be03e5477b41e59729d121c0a8 WatchSource:0}: Error finding container 4c5df6aa8b526001aa2ade1a50f356ac86b921be03e5477b41e59729d121c0a8: Status 404 returned error can't find the container with id 4c5df6aa8b526001aa2ade1a50f356ac86b921be03e5477b41e59729d121c0a8 Jan 23 09:55:31 crc kubenswrapper[4982]: I0123 09:55:31.551516 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/must-gather-pzp92" event={"ID":"aab87043-845d-4863-9f94-3b7d67fc113a","Type":"ContainerStarted","Data":"4c5df6aa8b526001aa2ade1a50f356ac86b921be03e5477b41e59729d121c0a8"} Jan 23 09:55:33 crc kubenswrapper[4982]: I0123 09:55:33.829371 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:55:33 crc kubenswrapper[4982]: E0123 09:55:33.830228 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:55:41 crc kubenswrapper[4982]: I0123 09:55:41.681139 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/must-gather-pzp92" event={"ID":"aab87043-845d-4863-9f94-3b7d67fc113a","Type":"ContainerStarted","Data":"b06d87839d2262015a9072ef30b5b61f3c347d5697db733c52c53f0c214f5781"} Jan 23 09:55:42 crc kubenswrapper[4982]: I0123 09:55:42.691760 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/must-gather-pzp92" event={"ID":"aab87043-845d-4863-9f94-3b7d67fc113a","Type":"ContainerStarted","Data":"273efca37cd1e401cd5cfbc847587adeee00be58e75c4f570ed27c7924c56558"} Jan 23 09:55:43 crc kubenswrapper[4982]: I0123 09:55:43.726781 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d58d5/must-gather-pzp92" podStartSLOduration=3.960817332 podStartE2EDuration="13.726757667s" podCreationTimestamp="2026-01-23 09:55:30 +0000 UTC" firstStartedPulling="2026-01-23 09:55:31.347933298 +0000 UTC m=+7205.828296684" lastFinishedPulling="2026-01-23 09:55:41.113873633 +0000 UTC m=+7215.594237019" observedRunningTime="2026-01-23 09:55:43.723060027 +0000 UTC m=+7218.203423423" watchObservedRunningTime="2026-01-23 09:55:43.726757667 +0000 UTC m=+7218.207121103" Jan 23 09:55:44 crc kubenswrapper[4982]: I0123 09:55:44.828379 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:55:44 crc kubenswrapper[4982]: E0123 09:55:44.829013 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:55:46 crc kubenswrapper[4982]: I0123 09:55:46.797250 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d58d5/crc-debug-mwm69"] Jan 23 09:55:46 crc kubenswrapper[4982]: I0123 09:55:46.799440 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:46 crc kubenswrapper[4982]: I0123 09:55:46.924699 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q544\" (UniqueName: \"kubernetes.io/projected/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-kube-api-access-4q544\") pod \"crc-debug-mwm69\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") " pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:46 crc kubenswrapper[4982]: I0123 09:55:46.925039 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-host\") pod \"crc-debug-mwm69\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") " pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:47 crc kubenswrapper[4982]: I0123 09:55:47.027722 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q544\" (UniqueName: \"kubernetes.io/projected/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-kube-api-access-4q544\") pod \"crc-debug-mwm69\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") " pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:47 crc kubenswrapper[4982]: I0123 09:55:47.027929 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-host\") pod \"crc-debug-mwm69\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") " pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:47 crc kubenswrapper[4982]: I0123 09:55:47.028015 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-host\") pod \"crc-debug-mwm69\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") " pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:47 crc kubenswrapper[4982]: I0123 09:55:47.050894 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q544\" (UniqueName: \"kubernetes.io/projected/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-kube-api-access-4q544\") pod \"crc-debug-mwm69\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") " pod="openshift-must-gather-d58d5/crc-debug-mwm69" Jan 23 09:55:47 crc kubenswrapper[4982]: I0123 09:55:47.125937 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-mwm69"
Jan 23 09:55:47 crc kubenswrapper[4982]: W0123 09:55:47.206484 4982 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf185f8c0_670f_4ea5_a8c0_bc0c7ddaff09.slice/crio-67890803151bfd207b5bafb19e8c11dee8f0c8c0f8a37aba8f336fd9124319ff WatchSource:0}: Error finding container 67890803151bfd207b5bafb19e8c11dee8f0c8c0f8a37aba8f336fd9124319ff: Status 404 returned error can't find the container with id 67890803151bfd207b5bafb19e8c11dee8f0c8c0f8a37aba8f336fd9124319ff
Jan 23 09:55:47 crc kubenswrapper[4982]: I0123 09:55:47.763158 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-mwm69" event={"ID":"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09","Type":"ContainerStarted","Data":"67890803151bfd207b5bafb19e8c11dee8f0c8c0f8a37aba8f336fd9124319ff"}
Jan 23 09:55:58 crc kubenswrapper[4982]: I0123 09:55:58.830963 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:55:58 crc kubenswrapper[4982]: E0123 09:55:58.831918 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:56:02 crc kubenswrapper[4982]: E0123 09:56:02.264054 4982 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
Jan 23 09:56:02 crc kubenswrapper[4982]: E0123 09:56:02.265485 4982 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q544,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-mwm69_openshift-must-gather-d58d5(f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 09:56:02 crc kubenswrapper[4982]: E0123 09:56:02.266707 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-d58d5/crc-debug-mwm69" podUID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09"
Jan 23 09:56:02 crc kubenswrapper[4982]: E0123 09:56:02.519679 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-d58d5/crc-debug-mwm69" podUID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09"
Jan 23 09:56:13 crc kubenswrapper[4982]: I0123 09:56:13.827928 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:56:13 crc kubenswrapper[4982]: E0123 09:56:13.828794 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:56:15 crc kubenswrapper[4982]: I0123 09:56:15.642565 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-mwm69" event={"ID":"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09","Type":"ContainerStarted","Data":"fab58a9a3c182b6a91452857d6b7bc05e2e2e5a35fd420cb26d62e92ea82cced"}
Jan 23 09:56:15 crc kubenswrapper[4982]: I0123 09:56:15.661749 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d58d5/crc-debug-mwm69" podStartSLOduration=2.448238087 podStartE2EDuration="29.661729259s" podCreationTimestamp="2026-01-23 09:55:46 +0000 UTC" firstStartedPulling="2026-01-23 09:55:47.21136419 +0000 UTC m=+7221.691727586" lastFinishedPulling="2026-01-23 09:56:14.424855372 +0000 UTC m=+7248.905218758" observedRunningTime="2026-01-23 09:56:15.654522755 +0000 UTC m=+7250.134886141" watchObservedRunningTime="2026-01-23 09:56:15.661729259 +0000 UTC m=+7250.142092665"
Jan 23 09:56:25 crc kubenswrapper[4982]: I0123 09:56:25.840652 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:56:25 crc kubenswrapper[4982]: E0123 09:56:25.844108 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:56:28 crc kubenswrapper[4982]: I0123 09:56:28.843513 4982 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.755991935s: [/var/lib/containers/storage/overlay/9648dc03f8acf23020ba84df9e40f4702676c8eaf1083879412c89d7765cb255/diff /var/log/pods/openstack_placement-567fd77856-sgz5r_9fcf73c8-de5e-40df-9d9a-f2f237e71a8c/placement-api/0.log]; will not log again for this container unless duration exceeds 2s
Jan 23 09:56:40 crc kubenswrapper[4982]: I0123 09:56:40.827561 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:56:40 crc kubenswrapper[4982]: E0123 09:56:40.828514 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:56:55 crc kubenswrapper[4982]: I0123 09:56:55.837850 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:56:55 crc kubenswrapper[4982]: E0123 09:56:55.838648 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:57:00 crc kubenswrapper[4982]: I0123 09:57:00.128366 4982 generic.go:334] "Generic (PLEG): container finished" podID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" containerID="fab58a9a3c182b6a91452857d6b7bc05e2e2e5a35fd420cb26d62e92ea82cced" exitCode=0
Jan 23 09:57:00 crc kubenswrapper[4982]: I0123 09:57:00.128456 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-mwm69" event={"ID":"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09","Type":"ContainerDied","Data":"fab58a9a3c182b6a91452857d6b7bc05e2e2e5a35fd420cb26d62e92ea82cced"}
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.273196 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-mwm69"
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.321359 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d58d5/crc-debug-mwm69"]
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.331181 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d58d5/crc-debug-mwm69"]
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.447679 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q544\" (UniqueName: \"kubernetes.io/projected/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-kube-api-access-4q544\") pod \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") "
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.447826 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-host\") pod \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\" (UID: \"f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09\") "
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.447982 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-host" (OuterVolumeSpecName: "host") pod "f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" (UID: "f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.448497 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-host\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.466327 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-kube-api-access-4q544" (OuterVolumeSpecName: "kube-api-access-4q544") pod "f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" (UID: "f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09"). InnerVolumeSpecName "kube-api-access-4q544". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.551247 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q544\" (UniqueName: \"kubernetes.io/projected/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09-kube-api-access-4q544\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:01 crc kubenswrapper[4982]: I0123 09:57:01.849000 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" path="/var/lib/kubelet/pods/f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09/volumes"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.156031 4982 scope.go:117] "RemoveContainer" containerID="fab58a9a3c182b6a91452857d6b7bc05e2e2e5a35fd420cb26d62e92ea82cced"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.156150 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-mwm69"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.477658 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d58d5/crc-debug-k7zkg"]
Jan 23 09:57:02 crc kubenswrapper[4982]: E0123 09:57:02.478150 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" containerName="container-00"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.478163 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" containerName="container-00"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.478434 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="f185f8c0-670f-4ea5-a8c0-bc0c7ddaff09" containerName="container-00"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.479216 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.596669 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwp5\" (UniqueName: \"kubernetes.io/projected/0d827881-8d81-4b42-a295-0f73c735bef6-kube-api-access-vfwp5\") pod \"crc-debug-k7zkg\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") " pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.596843 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d827881-8d81-4b42-a295-0f73c735bef6-host\") pod \"crc-debug-k7zkg\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") " pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.698853 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfwp5\" (UniqueName: \"kubernetes.io/projected/0d827881-8d81-4b42-a295-0f73c735bef6-kube-api-access-vfwp5\") pod \"crc-debug-k7zkg\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") " pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.699090 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d827881-8d81-4b42-a295-0f73c735bef6-host\") pod \"crc-debug-k7zkg\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") " pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.699291 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d827881-8d81-4b42-a295-0f73c735bef6-host\") pod \"crc-debug-k7zkg\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") " pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.753124 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfwp5\" (UniqueName: \"kubernetes.io/projected/0d827881-8d81-4b42-a295-0f73c735bef6-kube-api-access-vfwp5\") pod \"crc-debug-k7zkg\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") " pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:02 crc kubenswrapper[4982]: I0123 09:57:02.798677 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:03 crc kubenswrapper[4982]: I0123 09:57:03.170044 4982 generic.go:334] "Generic (PLEG): container finished" podID="0d827881-8d81-4b42-a295-0f73c735bef6" containerID="2651305c8f873551de2bbe7c819a66de21ee78fd4ccfa837cbe4cb549c250f09" exitCode=0
Jan 23 09:57:03 crc kubenswrapper[4982]: I0123 09:57:03.170112 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-k7zkg" event={"ID":"0d827881-8d81-4b42-a295-0f73c735bef6","Type":"ContainerDied","Data":"2651305c8f873551de2bbe7c819a66de21ee78fd4ccfa837cbe4cb549c250f09"}
Jan 23 09:57:03 crc kubenswrapper[4982]: I0123 09:57:03.170808 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-k7zkg" event={"ID":"0d827881-8d81-4b42-a295-0f73c735bef6","Type":"ContainerStarted","Data":"0942a7279ceff4a390d0789f23ac2394d26b58687f4e494ed1f10a712cfea62d"}
Jan 23 09:57:03 crc kubenswrapper[4982]: E0123 09:57:03.441169 4982 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d827881_8d81_4b42_a295_0f73c735bef6.slice/crio-conmon-2651305c8f873551de2bbe7c819a66de21ee78fd4ccfa837cbe4cb549c250f09.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d827881_8d81_4b42_a295_0f73c735bef6.slice/crio-2651305c8f873551de2bbe7c819a66de21ee78fd4ccfa837cbe4cb549c250f09.scope\": RecentStats: unable to find data in memory cache]"
Jan 23 09:57:03 crc kubenswrapper[4982]: I0123 09:57:03.622131 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d58d5/crc-debug-k7zkg"]
Jan 23 09:57:03 crc kubenswrapper[4982]: I0123 09:57:03.634778 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d58d5/crc-debug-k7zkg"]
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.331473 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.439800 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfwp5\" (UniqueName: \"kubernetes.io/projected/0d827881-8d81-4b42-a295-0f73c735bef6-kube-api-access-vfwp5\") pod \"0d827881-8d81-4b42-a295-0f73c735bef6\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") "
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.440287 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d827881-8d81-4b42-a295-0f73c735bef6-host\") pod \"0d827881-8d81-4b42-a295-0f73c735bef6\" (UID: \"0d827881-8d81-4b42-a295-0f73c735bef6\") "
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.440351 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d827881-8d81-4b42-a295-0f73c735bef6-host" (OuterVolumeSpecName: "host") pod "0d827881-8d81-4b42-a295-0f73c735bef6" (UID: "0d827881-8d81-4b42-a295-0f73c735bef6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.441149 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0d827881-8d81-4b42-a295-0f73c735bef6-host\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.446590 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d827881-8d81-4b42-a295-0f73c735bef6-kube-api-access-vfwp5" (OuterVolumeSpecName: "kube-api-access-vfwp5") pod "0d827881-8d81-4b42-a295-0f73c735bef6" (UID: "0d827881-8d81-4b42-a295-0f73c735bef6"). InnerVolumeSpecName "kube-api-access-vfwp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.543580 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfwp5\" (UniqueName: \"kubernetes.io/projected/0d827881-8d81-4b42-a295-0f73c735bef6-kube-api-access-vfwp5\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.900714 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d58d5/crc-debug-xcbct"]
Jan 23 09:57:04 crc kubenswrapper[4982]: E0123 09:57:04.901275 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d827881-8d81-4b42-a295-0f73c735bef6" containerName="container-00"
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.901293 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d827881-8d81-4b42-a295-0f73c735bef6" containerName="container-00"
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.901573 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d827881-8d81-4b42-a295-0f73c735bef6" containerName="container-00"
Jan 23 09:57:04 crc kubenswrapper[4982]: I0123 09:57:04.902406 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.057989 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d52c8f3a-c314-42b6-9330-cd98dbe07caa-host\") pod \"crc-debug-xcbct\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") " pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.058303 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcft7\" (UniqueName: \"kubernetes.io/projected/d52c8f3a-c314-42b6-9330-cd98dbe07caa-kube-api-access-tcft7\") pod \"crc-debug-xcbct\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") " pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.160296 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d52c8f3a-c314-42b6-9330-cd98dbe07caa-host\") pod \"crc-debug-xcbct\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") " pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.160488 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d52c8f3a-c314-42b6-9330-cd98dbe07caa-host\") pod \"crc-debug-xcbct\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") " pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.160503 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcft7\" (UniqueName: \"kubernetes.io/projected/d52c8f3a-c314-42b6-9330-cd98dbe07caa-kube-api-access-tcft7\") pod \"crc-debug-xcbct\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") " pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.189774 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcft7\" (UniqueName: \"kubernetes.io/projected/d52c8f3a-c314-42b6-9330-cd98dbe07caa-kube-api-access-tcft7\") pod \"crc-debug-xcbct\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") " pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.196849 4982 scope.go:117] "RemoveContainer" containerID="2651305c8f873551de2bbe7c819a66de21ee78fd4ccfa837cbe4cb549c250f09"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.197193 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-k7zkg"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.234268 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:05 crc kubenswrapper[4982]: I0123 09:57:05.848988 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d827881-8d81-4b42-a295-0f73c735bef6" path="/var/lib/kubelet/pods/0d827881-8d81-4b42-a295-0f73c735bef6/volumes"
Jan 23 09:57:06 crc kubenswrapper[4982]: I0123 09:57:06.218063 4982 generic.go:334] "Generic (PLEG): container finished" podID="d52c8f3a-c314-42b6-9330-cd98dbe07caa" containerID="e1a94e975095d4a9b666ecdb1112ff8b488cff736d487c061fcdbc7019780cb4" exitCode=0
Jan 23 09:57:06 crc kubenswrapper[4982]: I0123 09:57:06.218133 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-xcbct" event={"ID":"d52c8f3a-c314-42b6-9330-cd98dbe07caa","Type":"ContainerDied","Data":"e1a94e975095d4a9b666ecdb1112ff8b488cff736d487c061fcdbc7019780cb4"}
Jan 23 09:57:06 crc kubenswrapper[4982]: I0123 09:57:06.220020 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/crc-debug-xcbct" event={"ID":"d52c8f3a-c314-42b6-9330-cd98dbe07caa","Type":"ContainerStarted","Data":"47b11947dd6e9a71ebd4ab1660f4c23315144cf6f02b99b285f7fbd207590423"}
Jan 23 09:57:06 crc kubenswrapper[4982]: I0123 09:57:06.276077 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d58d5/crc-debug-xcbct"]
Jan 23 09:57:06 crc kubenswrapper[4982]: I0123 09:57:06.290182 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d58d5/crc-debug-xcbct"]
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.373876 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.517961 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcft7\" (UniqueName: \"kubernetes.io/projected/d52c8f3a-c314-42b6-9330-cd98dbe07caa-kube-api-access-tcft7\") pod \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") "
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.518049 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d52c8f3a-c314-42b6-9330-cd98dbe07caa-host\") pod \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\" (UID: \"d52c8f3a-c314-42b6-9330-cd98dbe07caa\") "
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.518169 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52c8f3a-c314-42b6-9330-cd98dbe07caa-host" (OuterVolumeSpecName: "host") pod "d52c8f3a-c314-42b6-9330-cd98dbe07caa" (UID: "d52c8f3a-c314-42b6-9330-cd98dbe07caa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.518822 4982 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d52c8f3a-c314-42b6-9330-cd98dbe07caa-host\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.525033 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52c8f3a-c314-42b6-9330-cd98dbe07caa-kube-api-access-tcft7" (OuterVolumeSpecName: "kube-api-access-tcft7") pod "d52c8f3a-c314-42b6-9330-cd98dbe07caa" (UID: "d52c8f3a-c314-42b6-9330-cd98dbe07caa"). InnerVolumeSpecName "kube-api-access-tcft7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.620817 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcft7\" (UniqueName: \"kubernetes.io/projected/d52c8f3a-c314-42b6-9330-cd98dbe07caa-kube-api-access-tcft7\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.827831 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:57:07 crc kubenswrapper[4982]: E0123 09:57:07.828233 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:57:07 crc kubenswrapper[4982]: I0123 09:57:07.840089 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52c8f3a-c314-42b6-9330-cd98dbe07caa" path="/var/lib/kubelet/pods/d52c8f3a-c314-42b6-9330-cd98dbe07caa/volumes"
Jan 23 09:57:08 crc kubenswrapper[4982]: I0123 09:57:08.246855 4982 scope.go:117] "RemoveContainer" containerID="e1a94e975095d4a9b666ecdb1112ff8b488cff736d487c061fcdbc7019780cb4"
Jan 23 09:57:08 crc kubenswrapper[4982]: I0123 09:57:08.246893 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/crc-debug-xcbct"
Jan 23 09:57:19 crc kubenswrapper[4982]: I0123 09:57:19.828209 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:57:19 crc kubenswrapper[4982]: E0123 09:57:19.828968 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.676051 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6w48"]
Jan 23 09:57:24 crc kubenswrapper[4982]: E0123 09:57:24.677433 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52c8f3a-c314-42b6-9330-cd98dbe07caa" containerName="container-00"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.677454 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52c8f3a-c314-42b6-9330-cd98dbe07caa" containerName="container-00"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.677860 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52c8f3a-c314-42b6-9330-cd98dbe07caa" containerName="container-00"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.680876 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.687051 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6w48"]
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.777425 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-catalog-content\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.777648 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-utilities\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.777766 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2zn2\" (UniqueName: \"kubernetes.io/projected/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-kube-api-access-b2zn2\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.880943 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-utilities\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.881094 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2zn2\" (UniqueName: \"kubernetes.io/projected/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-kube-api-access-b2zn2\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.881665 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-utilities\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.882099 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-catalog-content\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.882464 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-catalog-content\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:24 crc kubenswrapper[4982]: I0123 09:57:24.917540 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2zn2\" (UniqueName: \"kubernetes.io/projected/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-kube-api-access-b2zn2\") pod \"certified-operators-x6w48\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") " pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:25 crc kubenswrapper[4982]: I0123 09:57:25.006010 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:25 crc kubenswrapper[4982]: I0123 09:57:25.969765 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6w48"]
Jan 23 09:57:26 crc kubenswrapper[4982]: I0123 09:57:26.458938 4982 generic.go:334] "Generic (PLEG): container finished" podID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerID="39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8" exitCode=0
Jan 23 09:57:26 crc kubenswrapper[4982]: I0123 09:57:26.459045 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerDied","Data":"39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8"}
Jan 23 09:57:26 crc kubenswrapper[4982]: I0123 09:57:26.459210 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerStarted","Data":"bf467819cc7d85e95d07541cc15930aea5b6234d7f10e4d873a50c7710089a9b"}
Jan 23 09:57:28 crc kubenswrapper[4982]: I0123 09:57:28.483002 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerStarted","Data":"6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd"}
Jan 23 09:57:29 crc kubenswrapper[4982]: I0123 09:57:29.497200 4982 generic.go:334] "Generic (PLEG): container finished" podID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerID="6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd" exitCode=0
Jan 23 09:57:29 crc kubenswrapper[4982]: I0123 09:57:29.497361 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerDied","Data":"6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd"}
Jan 23 09:57:32 crc kubenswrapper[4982]: I0123 09:57:32.591729 4982 generic.go:334] "Generic (PLEG): container finished" podID="8f743137-4885-4abc-b136-19b84d136f27" containerID="56dc999fee95c3b1a9c14547a592a904613af7b619590987ba4b575f3c54e81e" exitCode=0
Jan 23 09:57:32 crc kubenswrapper[4982]: I0123 09:57:32.591809 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" event={"ID":"8f743137-4885-4abc-b136-19b84d136f27","Type":"ContainerDied","Data":"56dc999fee95c3b1a9c14547a592a904613af7b619590987ba4b575f3c54e81e"}
Jan 23 09:57:32 crc kubenswrapper[4982]: I0123 09:57:32.595946 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerStarted","Data":"a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37"}
Jan 23 09:57:32 crc kubenswrapper[4982]: I0123 09:57:32.636147 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6w48" podStartSLOduration=3.01558411 podStartE2EDuration="8.636127587s" podCreationTimestamp="2026-01-23 09:57:24 +0000 UTC" firstStartedPulling="2026-01-23 09:57:26.460591046 +0000 UTC m=+7320.940954432" lastFinishedPulling="2026-01-23 09:57:32.081134523 +0000 UTC m=+7326.561497909" observedRunningTime="2026-01-23 09:57:32.625575593 +0000 UTC m=+7327.105938979" watchObservedRunningTime="2026-01-23 09:57:32.636127587 +0000 UTC m=+7327.116490973"
Jan 23 09:57:33 crc kubenswrapper[4982]: I0123 09:57:33.830616 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:57:33 crc kubenswrapper[4982]: E0123 09:57:33.831511 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.221453 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6"
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.321667 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-ssh-key-openstack-cell1\") pod \"8f743137-4885-4abc-b136-19b84d136f27\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") "
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.321830 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-inventory\") pod \"8f743137-4885-4abc-b136-19b84d136f27\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") "
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.322063 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n297h\" (UniqueName: \"kubernetes.io/projected/8f743137-4885-4abc-b136-19b84d136f27-kube-api-access-n297h\") pod \"8f743137-4885-4abc-b136-19b84d136f27\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") "
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.322227 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-tripleo-cleanup-combined-ca-bundle\") pod \"8f743137-4885-4abc-b136-19b84d136f27\" (UID: \"8f743137-4885-4abc-b136-19b84d136f27\") "
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.335796 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "8f743137-4885-4abc-b136-19b84d136f27" (UID: "8f743137-4885-4abc-b136-19b84d136f27"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.348954 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f743137-4885-4abc-b136-19b84d136f27-kube-api-access-n297h" (OuterVolumeSpecName: "kube-api-access-n297h") pod "8f743137-4885-4abc-b136-19b84d136f27" (UID: "8f743137-4885-4abc-b136-19b84d136f27"). InnerVolumeSpecName "kube-api-access-n297h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.361772 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "8f743137-4885-4abc-b136-19b84d136f27" (UID: "8f743137-4885-4abc-b136-19b84d136f27"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.366643 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-inventory" (OuterVolumeSpecName: "inventory") pod "8f743137-4885-4abc-b136-19b84d136f27" (UID: "8f743137-4885-4abc-b136-19b84d136f27"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.426603 4982 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-inventory\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.426658 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n297h\" (UniqueName: \"kubernetes.io/projected/8f743137-4885-4abc-b136-19b84d136f27-kube-api-access-n297h\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.426672 4982 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.426687 4982 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/8f743137-4885-4abc-b136-19b84d136f27-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.619214 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6" event={"ID":"8f743137-4885-4abc-b136-19b84d136f27","Type":"ContainerDied","Data":"07fd7d3d3f1b1c38bc9699e33a94ba22b4e639c6beb717b64fe1bf10485bab26"}
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.619257 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fd7d3d3f1b1c38bc9699e33a94ba22b4e639c6beb717b64fe1bf10485bab26"
Jan 23 09:57:34 crc kubenswrapper[4982]: I0123 09:57:34.619298 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6"
Jan 23 09:57:35 crc kubenswrapper[4982]: I0123 09:57:35.007059 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:35 crc kubenswrapper[4982]: I0123 09:57:35.008224 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:35 crc kubenswrapper[4982]: I0123 09:57:35.056382 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:41 crc kubenswrapper[4982]: I0123 09:57:41.426609 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_085e24fa-319d-49ec-bad4-4f4b515e795a/init-config-reloader/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.508558 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_085e24fa-319d-49ec-bad4-4f4b515e795a/config-reloader/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.511876 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_085e24fa-319d-49ec-bad4-4f4b515e795a/init-config-reloader/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.590658 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_085e24fa-319d-49ec-bad4-4f4b515e795a/alertmanager/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.720070 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_efd75096-1aa2-4c86-9ea8-3d6c1bca5f90/aodh-api/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.772827 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_efd75096-1aa2-4c86-9ea8-3d6c1bca5f90/aodh-evaluator/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.839216 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_efd75096-1aa2-4c86-9ea8-3d6c1bca5f90/aodh-listener/0.log"
Jan 23 09:57:42 crc kubenswrapper[4982]: I0123 09:57:42.876241 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_efd75096-1aa2-4c86-9ea8-3d6c1bca5f90/aodh-notifier/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.043974 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bf64c5f48-8qmjw_f12b3a0a-3ce2-43ff-892b-91f50f5268a6/barbican-api/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.096644 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-bf64c5f48-8qmjw_f12b3a0a-3ce2-43ff-892b-91f50f5268a6/barbican-api-log/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.240978 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-647fc4b846-h6tq7_dc042064-7f87-4037-b952-4d1140d3e85b/barbican-keystone-listener/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.371761 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-647fc4b846-h6tq7_dc042064-7f87-4037-b952-4d1140d3e85b/barbican-keystone-listener-log/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.474575 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-586bb97bb5-gvcbp_48a0ed35-6d01-410a-9ed1-ff102bdc0afc/barbican-worker/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.540797 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-586bb97bb5-gvcbp_48a0ed35-6d01-410a-9ed1-ff102bdc0afc/barbican-worker-log/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.692455 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6373cfd-07f7-414a-b88f-b928c187d201/ceilometer-central-agent/1.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.764031 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6373cfd-07f7-414a-b88f-b928c187d201/ceilometer-central-agent/0.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.813515 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6373cfd-07f7-414a-b88f-b928c187d201/ceilometer-notification-agent/1.log"
Jan 23 09:57:43 crc kubenswrapper[4982]: I0123 09:57:43.974555 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6373cfd-07f7-414a-b88f-b928c187d201/ceilometer-notification-agent/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.069339 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6373cfd-07f7-414a-b88f-b928c187d201/sg-core/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.152037 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e6373cfd-07f7-414a-b88f-b928c187d201/proxy-httpd/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.433800 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f71ec0e1-97be-48f5-9601-93cdf12b89a6/cinder-api-log/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.486196 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f71ec0e1-97be-48f5-9601-93cdf12b89a6/cinder-api/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.651579 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4a93b5e2-a1b2-427f-8bc6-013bef4e3255/cinder-scheduler/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.707450 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4a93b5e2-a1b2-427f-8bc6-013bef4e3255/probe/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.811345 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-776dc944c9-lsdm4_fa0a2f02-eb37-430e-8700-096aa7f44eb8/init/0.log"
Jan 23 09:57:44 crc kubenswrapper[4982]: I0123 09:57:44.828024 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4"
Jan 23 09:57:44 crc kubenswrapper[4982]: E0123 09:57:44.828405 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.073424 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.080264 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-776dc944c9-lsdm4_fa0a2f02-eb37-430e-8700-096aa7f44eb8/init/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.089871 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-776dc944c9-lsdm4_fa0a2f02-eb37-430e-8700-096aa7f44eb8/dnsmasq-dns/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.147138 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_25d38494-7e3d-4359-869b-ac0460e2cf52/glance-httpd/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.268310 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6w48"]
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.327427 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_25d38494-7e3d-4359-869b-ac0460e2cf52/glance-log/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.476455 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ec5cd222-e178-4af8-aa45-d396d580d2d5/glance-httpd/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.486148 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ec5cd222-e178-4af8-aa45-d396d580d2d5/glance-log/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.745119 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x6w48" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="registry-server" containerID="cri-o://a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37" gracePeriod=2
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.939227 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-78dc589b44-6kdjt_733f4324-f19a-4d66-b052-a58baa74f275/heat-api/0.log"
Jan 23 09:57:45 crc kubenswrapper[4982]: I0123 09:57:45.944177 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6b79f4dfb5-5wf2j_094d987b-cf37-440a-b435-4083f47919bc/heat-cfnapi/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.029560 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6c94454b9f-m7jtc_880081ea-a53e-4a63-9ea8-031a62596458/heat-engine/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.349015 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5977db955d-zpcv9_0c23088a-731e-4f50-9faa-70f390b4f3df/horizon-log/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.389275 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.444306 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5977db955d-zpcv9_0c23088a-731e-4f50-9faa-70f390b4f3df/horizon/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.533652 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2zn2\" (UniqueName: \"kubernetes.io/projected/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-kube-api-access-b2zn2\") pod \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") "
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.533796 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-catalog-content\") pod \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") "
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.533863 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-utilities\") pod \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\" (UID: \"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a\") "
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.535717 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-utilities" (OuterVolumeSpecName: "utilities") pod "5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" (UID: "5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.541497 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-kube-api-access-b2zn2" (OuterVolumeSpecName: "kube-api-access-b2zn2") pod "5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" (UID: "5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a"). InnerVolumeSpecName "kube-api-access-b2zn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.591871 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" (UID: "5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.616591 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-c44b77bf8-p9kls_c27bb7f3-c125-4255-8fa7-fcdd0127119d/keystone-api/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.639312 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.639350 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2zn2\" (UniqueName: \"kubernetes.io/projected/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-kube-api-access-b2zn2\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.639359 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.676433 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_61c17c05-4452-412a-9434-b1997807b03c/adoption/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.701308 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_90a4acf4-9689-4375-b94f-fad12c76e1d8/kube-state-metrics/0.log"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.759202 4982 generic.go:334] "Generic (PLEG): container finished" podID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerID="a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37" exitCode=0
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.759267 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerDied","Data":"a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37"}
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.759275 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6w48"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.759307 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6w48" event={"ID":"5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a","Type":"ContainerDied","Data":"bf467819cc7d85e95d07541cc15930aea5b6234d7f10e4d873a50c7710089a9b"}
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.759333 4982 scope.go:117] "RemoveContainer" containerID="a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.812205 4982 scope.go:117] "RemoveContainer" containerID="6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.822685 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6w48"]
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.841868 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x6w48"]
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.850024 4982 scope.go:117] "RemoveContainer" containerID="39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.910477 4982 scope.go:117] "RemoveContainer" containerID="a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37"
Jan 23 09:57:46 crc kubenswrapper[4982]: E0123 09:57:46.911317 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37\": container with ID starting with a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37 not found: ID does not exist" containerID="a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.911350 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37"} err="failed to get container status \"a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37\": rpc error: code = NotFound desc = could not find container \"a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37\": container with ID starting with a0e1fe0d565e8d43ebbac19ae656a05aa15902550206389ee616c2452a28aa37 not found: ID does not exist"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.911372 4982 scope.go:117] "RemoveContainer" containerID="6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd"
Jan 23 09:57:46 crc kubenswrapper[4982]: E0123 09:57:46.911836 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd\": container with ID starting with 6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd not found: ID does not exist" containerID="6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.911881 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd"} err="failed to get container status \"6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd\": rpc error: code = NotFound desc = could not find container \"6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd\": container with ID starting with 6adea4a93868bc80a8597798ca9a5484f4210d45bc4a3e21aa46bdbdb3ea3acd not found: ID does not exist"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.911914 4982 scope.go:117] "RemoveContainer" containerID="39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8"
Jan 23 09:57:46 crc kubenswrapper[4982]: E0123 09:57:46.912283 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8\": container with ID starting with 39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8 not found: ID does not exist" containerID="39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8"
Jan 23 09:57:46 crc kubenswrapper[4982]: I0123 09:57:46.912305 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8"} err="failed to get container status \"39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8\": rpc error: code = NotFound desc = could not find container \"39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8\": container with ID starting with 39ee584ba933fa225b5063e60d9253de581327e2168659392a869ffb16bfd6f8 not found: ID does not exist"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.124522 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8455c4f66f-8h5kj_600464ca-45e6-4dd1-9863-50d1f839fd83/neutron-httpd/0.log"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.158061 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8455c4f66f-8h5kj_600464ca-45e6-4dd1-9863-50d1f839fd83/neutron-api/0.log"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.480464 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0338c87c-5dfe-406d-adde-089b0e43220f/nova-api-log/0.log"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.704579 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_df7cd99d-7e67-413e-997c-beec4f6b2624/nova-cell0-conductor-conductor/0.log"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.761226 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0338c87c-5dfe-406d-adde-089b0e43220f/nova-api-api/0.log"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.829519 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_17176ca4-3624-4cef-95d5-0f36fa599d49/nova-cell1-conductor-conductor/0.log"
Jan 23 09:57:47 crc kubenswrapper[4982]: I0123 09:57:47.843763 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" path="/var/lib/kubelet/pods/5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a/volumes"
Jan 23 09:57:48 crc kubenswrapper[4982]: I0123 09:57:48.092768 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4aa6ed7d-f756-4ae3-b708-0f172751af6d/nova-cell1-novncproxy-novncproxy/0.log"
Jan 23 09:57:48 crc kubenswrapper[4982]: I0123 09:57:48.466301 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_50482909-3a21-4256-845b-3967c00bcf1b/nova-metadata-log/0.log"
Jan 23 09:57:48 crc kubenswrapper[4982]: I0123 09:57:48.709096 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_9ac284cc-3278-42cd-9a34-466f7dd97387/nova-scheduler-scheduler/0.log"
Jan 23 09:57:48 crc kubenswrapper[4982]: I0123 09:57:48.899266 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7dd5cd75f7-rrmc7_c4cdb462-46e9-49e7-b539-992fcf1d6e67/init/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.146153 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7dd5cd75f7-rrmc7_c4cdb462-46e9-49e7-b539-992fcf1d6e67/init/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.154095 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7dd5cd75f7-rrmc7_c4cdb462-46e9-49e7-b539-992fcf1d6e67/octavia-api-provider-agent/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.317614 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_50482909-3a21-4256-845b-3967c00bcf1b/nova-metadata-metadata/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.340066 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-7dd5cd75f7-rrmc7_c4cdb462-46e9-49e7-b539-992fcf1d6e67/octavia-api/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.374403 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-bjmrt_816268cf-a757-484e-83df-b8401a8e03b3/init/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.636441 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-bjmrt_816268cf-a757-484e-83df-b8401a8e03b3/init/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.698714 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-6q5jz_5340823e-9b64-4b44-83a6-f42c593d5bea/init/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.734557 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-bjmrt_816268cf-a757-484e-83df-b8401a8e03b3/octavia-healthmanager/0.log"
Jan 23 09:57:49 crc kubenswrapper[4982]: I0123 09:57:49.943511 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-6q5jz_5340823e-9b64-4b44-83a6-f42c593d5bea/octavia-housekeeping/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.028766 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-6q5jz_5340823e-9b64-4b44-83a6-f42c593d5bea/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.046594 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-7b97d6bc64-nlx7l_36fc9595-1ab2-42ec-ab65-f1e979daf0fd/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.227825 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-7b97d6bc64-nlx7l_36fc9595-1ab2-42ec-ab65-f1e979daf0fd/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.342497 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-7b97d6bc64-nlx7l_36fc9595-1ab2-42ec-ab65-f1e979daf0fd/octavia-amphora-httpd/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.350766 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2b1a7f5c-0b54-4fd8-b231-134061dc72be/memcached/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.376263 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-drt6w_edc4b282-8200-4c45-8216-3be60dceea0e/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.540237 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-drt6w_edc4b282-8200-4c45-8216-3be60dceea0e/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.590033 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-lrbkl_df37afea-7f1b-411e-ad95-cac7051ac0c5/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.593975 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-drt6w_edc4b282-8200-4c45-8216-3be60dceea0e/octavia-rsyslog/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.741751 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-lrbkl_df37afea-7f1b-411e-ad95-cac7051ac0c5/init/0.log"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.831762 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-frs55"]
Jan 23 09:57:50 crc kubenswrapper[4982]: E0123 09:57:50.836290 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="extract-content"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.836318 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="extract-content"
Jan 23 09:57:50 crc kubenswrapper[4982]: E0123 09:57:50.836374 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f743137-4885-4abc-b136-19b84d136f27" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.836381 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f743137-4885-4abc-b136-19b84d136f27" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 23 09:57:50 crc kubenswrapper[4982]: E0123 09:57:50.836392 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="registry-server"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.836398 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="registry-server"
Jan 23 09:57:50 crc kubenswrapper[4982]: E0123 09:57:50.836428 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="extract-utilities"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.836434 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="extract-utilities"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.836669 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f743137-4885-4abc-b136-19b84d136f27" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.836687 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd1e5cf-56b7-42ea-989e-e1f5eb8e729a" containerName="registry-server"
Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.845261 4982 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.845451 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-frs55"] Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.897512 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-utilities\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.897665 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pr4z\" (UniqueName: \"kubernetes.io/projected/46b6c886-4556-4588-96ae-3075be55578b-kube-api-access-2pr4z\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.897900 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-catalog-content\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.910687 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_945dc347-e5e3-4533-ab71-b8b2d051fdeb/mysql-bootstrap/0.log" Jan 23 09:57:50 crc kubenswrapper[4982]: I0123 09:57:50.912969 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-lrbkl_df37afea-7f1b-411e-ad95-cac7051ac0c5/octavia-worker/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.000195 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-catalog-content\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.000260 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-utilities\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.000324 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pr4z\" (UniqueName: \"kubernetes.io/projected/46b6c886-4556-4588-96ae-3075be55578b-kube-api-access-2pr4z\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.000827 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-utilities\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.000876 4982 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-catalog-content\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.022807 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pr4z\" (UniqueName: \"kubernetes.io/projected/46b6c886-4556-4588-96ae-3075be55578b-kube-api-access-2pr4z\") pod \"redhat-operators-frs55\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.117062 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_945dc347-e5e3-4533-ab71-b8b2d051fdeb/galera/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.131527 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1e221cae-f304-4f00-be53-5a72edbf3bd7/mysql-bootstrap/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.140061 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_945dc347-e5e3-4533-ab71-b8b2d051fdeb/mysql-bootstrap/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.173575 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.474428 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1e221cae-f304-4f00-be53-5a72edbf3bd7/galera/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.486175 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1e221cae-f304-4f00-be53-5a72edbf3bd7/mysql-bootstrap/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.550679 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0521df85-12eb-406a-a7db-b876ddc88d26/openstackclient/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.687817 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fpsz6_bcccbc39-8bcd-4bf4-81a8-5078ae7329c7/ovn-controller/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.755722 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-frs55"] Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.856923 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerStarted","Data":"4ee9b410998afc564fff21f7d8988b3248966949f1755359e489c44ded1d4f2d"} Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.914966 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qvfjr_240ff876-28c0-4c64-8157-205c9719f944/openstack-network-exporter/0.log" Jan 23 09:57:51 crc kubenswrapper[4982]: I0123 09:57:51.945119 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kkldr_b9e18733-c4a2-4870-b4c4-bdb7133bab43/ovsdb-server-init/0.log" Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.312654 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kkldr_b9e18733-c4a2-4870-b4c4-bdb7133bab43/ovs-vswitchd/0.log" Jan 23 
09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.339436 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_47974c2a-39ba-4bbc-95ee-c27fda8fdc9c/adoption/0.log" Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.347573 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kkldr_b9e18733-c4a2-4870-b4c4-bdb7133bab43/ovsdb-server/0.log" Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.348655 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-kkldr_b9e18733-c4a2-4870-b4c4-bdb7133bab43/ovsdb-server-init/0.log" Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.837657 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0deaa403-a6a8-45d1-8bae-ca5413dc2a82/ovn-northd/0.log" Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.837861 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0deaa403-a6a8-45d1-8bae-ca5413dc2a82/openstack-network-exporter/0.log" Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.868318 4982 generic.go:334] "Generic (PLEG): container finished" podID="46b6c886-4556-4588-96ae-3075be55578b" containerID="1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883" exitCode=0 Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.868365 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerDied","Data":"1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883"} Jan 23 09:57:52 crc kubenswrapper[4982]: I0123 09:57:52.974809 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9c94a864-3d02-4070-a3e8-23791aef5c40/openstack-network-exporter/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.168147 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9c94a864-3d02-4070-a3e8-23791aef5c40/ovsdbserver-nb/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.229298 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_19d8b73a-7c62-4cf2-ba91-87a7ceb346ed/openstack-network-exporter/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.235125 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_19d8b73a-7c62-4cf2-ba91-87a7ceb346ed/ovsdbserver-nb/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.409330 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_14033a67-7424-45ac-8ec1-3230fa3d6672/openstack-network-exporter/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.433559 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_14033a67-7424-45ac-8ec1-3230fa3d6672/ovsdbserver-nb/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.578082 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eaf53cb7-e27b-4fee-be64-1cb37bdda7cf/ovsdbserver-sb/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.587297 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_eaf53cb7-e27b-4fee-be64-1cb37bdda7cf/openstack-network-exporter/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.714288 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-1_038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1/openstack-network-exporter/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.734261 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_038a60c8-ea6e-4f87-bc2a-a0851cc2fdb1/ovsdbserver-sb/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.853108 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_697244a8-6ee6-40b8-8575-3ecc60e5b34b/openstack-network-exporter/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.885587 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerStarted","Data":"36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40"} Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.918192 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_697244a8-6ee6-40b8-8575-3ecc60e5b34b/ovsdbserver-sb/0.log" Jan 23 09:57:53 crc kubenswrapper[4982]: I0123 09:57:53.989607 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-567fd77856-sgz5r_9fcf73c8-de5e-40df-9d9a-f2f237e71a8c/placement-api/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.133654 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-567fd77856-sgz5r_9fcf73c8-de5e-40df-9d9a-f2f237e71a8c/placement-log/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.190320 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-ctdnkg_1d05323a-35b9-413c-a48f-6ffa11d34423/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.296762 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c55dbd-0f43-4959-9035-a1d07894d0bb/init-config-reloader/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.500519 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c55dbd-0f43-4959-9035-a1d07894d0bb/init-config-reloader/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.505636 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c55dbd-0f43-4959-9035-a1d07894d0bb/config-reloader/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.544569 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c55dbd-0f43-4959-9035-a1d07894d0bb/prometheus/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.559448 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c55dbd-0f43-4959-9035-a1d07894d0bb/thanos-sidecar/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.578641 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_06c55dbd-0f43-4959-9035-a1d07894d0bb/prometheus/1.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.736438 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65dd14b2-00d6-4e7f-a06d-c47397d9236a/setup-container/0.log" Jan 23 09:57:54 crc kubenswrapper[4982]: I0123 09:57:54.913454 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65dd14b2-00d6-4e7f-a06d-c47397d9236a/setup-container/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.038063 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4da68286-4aaa-4c7e-8738-e41bdc402dc1/setup-container/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.208433 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4da68286-4aaa-4c7e-8738-e41bdc402dc1/setup-container/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.286733 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_65dd14b2-00d6-4e7f-a06d-c47397d9236a/rabbitmq/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.455508 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-949fbbb54-chlc7_7e007917-bb3f-4c30-a0a6-842f1a1cbb34/proxy-server/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.466419 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4da68286-4aaa-4c7e-8738-e41bdc402dc1/rabbitmq/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.498159 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-949fbbb54-chlc7_7e007917-bb3f-4c30-a0a6-842f1a1cbb34/proxy-httpd/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.553443 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-8hv7d_6e957df1-1e40-4230-9db0-af1795aebbc2/swift-ring-rebalance/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.759888 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-nxsz6_8f743137-4885-4abc-b136-19b84d136f27/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Jan 23 09:57:55 crc kubenswrapper[4982]: I0123 09:57:55.837877 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:57:55 crc kubenswrapper[4982]: E0123 09:57:55.838355 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:57:57 crc kubenswrapper[4982]: I0123 09:57:57.928207 4982 generic.go:334] "Generic (PLEG): container finished" podID="46b6c886-4556-4588-96ae-3075be55578b" containerID="36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40" exitCode=0 Jan 23 09:57:57 crc kubenswrapper[4982]: I0123 09:57:57.928315 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerDied","Data":"36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40"} Jan 23 09:57:59 crc kubenswrapper[4982]: I0123 09:57:59.950824 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerStarted","Data":"532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544"} Jan 23 09:58:01 crc kubenswrapper[4982]: I0123 09:58:01.174712 
4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:58:01 crc kubenswrapper[4982]: I0123 09:58:01.175063 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:58:02 crc kubenswrapper[4982]: I0123 09:58:02.249343 4982 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-frs55" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="registry-server" probeResult="failure" output=< Jan 23 09:58:02 crc kubenswrapper[4982]: timeout: failed to connect service ":50051" within 1s Jan 23 09:58:02 crc kubenswrapper[4982]: > Jan 23 09:58:10 crc kubenswrapper[4982]: I0123 09:58:10.829231 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:58:10 crc kubenswrapper[4982]: E0123 09:58:10.830550 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:58:11 crc kubenswrapper[4982]: I0123 09:58:11.228019 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:58:11 crc kubenswrapper[4982]: I0123 09:58:11.248431 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-frs55" podStartSLOduration=15.025689031 podStartE2EDuration="21.248409132s" podCreationTimestamp="2026-01-23 09:57:50 +0000 UTC" firstStartedPulling="2026-01-23 09:57:52.870459795 +0000 UTC m=+7347.350823181" lastFinishedPulling="2026-01-23 09:57:59.093179896 +0000 UTC m=+7353.573543282" observedRunningTime="2026-01-23 09:57:59.978391879 +0000 UTC m=+7354.458755275" watchObservedRunningTime="2026-01-23 09:58:11.248409132 +0000 UTC m=+7365.728772518" Jan 23 09:58:11 crc kubenswrapper[4982]: I0123 09:58:11.281190 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:58:11 crc kubenswrapper[4982]: I0123 09:58:11.470064 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-frs55"] Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.091907 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-frs55" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="registry-server" containerID="cri-o://532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544" gracePeriod=2 Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.552219 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.684754 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-utilities\") pod \"46b6c886-4556-4588-96ae-3075be55578b\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.684905 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pr4z\" (UniqueName: \"kubernetes.io/projected/46b6c886-4556-4588-96ae-3075be55578b-kube-api-access-2pr4z\") pod \"46b6c886-4556-4588-96ae-3075be55578b\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.685059 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-catalog-content\") pod \"46b6c886-4556-4588-96ae-3075be55578b\" (UID: \"46b6c886-4556-4588-96ae-3075be55578b\") " Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.695325 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-utilities" (OuterVolumeSpecName: "utilities") pod "46b6c886-4556-4588-96ae-3075be55578b" (UID: "46b6c886-4556-4588-96ae-3075be55578b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.700937 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b6c886-4556-4588-96ae-3075be55578b-kube-api-access-2pr4z" (OuterVolumeSpecName: "kube-api-access-2pr4z") pod "46b6c886-4556-4588-96ae-3075be55578b" (UID: "46b6c886-4556-4588-96ae-3075be55578b"). InnerVolumeSpecName "kube-api-access-2pr4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.787186 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.787216 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pr4z\" (UniqueName: \"kubernetes.io/projected/46b6c886-4556-4588-96ae-3075be55578b-kube-api-access-2pr4z\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.814520 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46b6c886-4556-4588-96ae-3075be55578b" (UID: "46b6c886-4556-4588-96ae-3075be55578b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 09:58:13 crc kubenswrapper[4982]: I0123 09:58:13.891759 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b6c886-4556-4588-96ae-3075be55578b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.104988 4982 generic.go:334] "Generic (PLEG): container finished" podID="46b6c886-4556-4588-96ae-3075be55578b" containerID="532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544" exitCode=0 Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.105042 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerDied","Data":"532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544"} Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.105069 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-frs55" event={"ID":"46b6c886-4556-4588-96ae-3075be55578b","Type":"ContainerDied","Data":"4ee9b410998afc564fff21f7d8988b3248966949f1755359e489c44ded1d4f2d"} Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.105084 4982 scope.go:117] "RemoveContainer" containerID="532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.105260 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-frs55" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.133190 4982 scope.go:117] "RemoveContainer" containerID="36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.138093 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-frs55"] Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.152331 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-frs55"] Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.153931 4982 scope.go:117] "RemoveContainer" containerID="1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.227112 4982 scope.go:117] "RemoveContainer" containerID="532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544" Jan 23 09:58:14 crc kubenswrapper[4982]: E0123 09:58:14.228054 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544\": container with ID starting with 532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544 not found: ID does not exist" containerID="532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.228110 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544"} err="failed to get container status \"532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544\": rpc error: code = NotFound desc = could not find container \"532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544\": container with ID starting with 532751c9cc658e8aa6d37efe1a89d67b1ab125a001836b9dd303dde5e41da544 not found: ID does not exist" Jan 23 09:58:14 crc 
kubenswrapper[4982]: I0123 09:58:14.228137 4982 scope.go:117] "RemoveContainer" containerID="36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40" Jan 23 09:58:14 crc kubenswrapper[4982]: E0123 09:58:14.229620 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40\": container with ID starting with 36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40 not found: ID does not exist" containerID="36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.229688 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40"} err="failed to get container status \"36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40\": rpc error: code = NotFound desc = could not find container \"36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40\": container with ID starting with 36e94fbab268bfeccecf23b5342a804b3d64da037218ad07f7df0944bdd78c40 not found: ID does not exist" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.229720 4982 scope.go:117] "RemoveContainer" containerID="1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883" Jan 23 09:58:14 crc kubenswrapper[4982]: E0123 09:58:14.231080 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883\": container with ID starting with 1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883 not found: ID does not exist" containerID="1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883" Jan 23 09:58:14 crc kubenswrapper[4982]: I0123 09:58:14.231132 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883"} err="failed to get container status \"1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883\": rpc error: code = NotFound desc = could not find container \"1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883\": container with ID starting with 1fed48f8c349cd40dfefc3dbd735a0b06877ed8f76a0f4efcc204614c78de883 not found: ID does not exist" Jan 23 09:58:15 crc kubenswrapper[4982]: I0123 09:58:15.840611 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b6c886-4556-4588-96ae-3075be55578b" path="/var/lib/kubelet/pods/46b6c886-4556-4588-96ae-3075be55578b/volumes" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.379270 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/util/0.log" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.548753 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/util/0.log" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.608748 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/pull/0.log" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.621945 4982 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/pull/0.log" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.786041 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/util/0.log" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.831556 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/extract/0.log" Jan 23 09:58:23 crc kubenswrapper[4982]: I0123 09:58:23.875875 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_97bb1f8024535e829fa8894f35597f6754858047b9fee802213b02de86zt2zp_524eccad-4e68-4e0c-84e9-bc292ad9eb25/pull/0.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.005951 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-jhcmh_0167eb4d-3772-402e-8451-69789e17dd5e/manager/1.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.145197 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-jhcmh_0167eb4d-3772-402e-8451-69789e17dd5e/manager/0.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.157777 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-8rr6c_1be00a16-06c6-4fe5-9626-149c854680f8/manager/1.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.292046 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-8rr6c_1be00a16-06c6-4fe5-9626-149c854680f8/manager/0.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.375733 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-pm6mp_a00a88ad-d184-4394-9231-4da810afad02/manager/0.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.403766 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-pm6mp_a00a88ad-d184-4394-9231-4da810afad02/manager/1.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.615135 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-glq8j_d6169561-3b43-4dcd-8fa6-f08d484bccdf/manager/1.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.755369 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-glq8j_d6169561-3b43-4dcd-8fa6-f08d484bccdf/manager/0.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.802137 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gvwjl_5656681c-2a30-47fa-bb61-f33660c36db0/manager/1.log" Jan 23 09:58:24 crc kubenswrapper[4982]: I0123 09:58:24.934212 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-gvwjl_5656681c-2a30-47fa-bb61-f33660c36db0/manager/0.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.043416 
4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-44mmv_5285fcd9-54a1-4499-8930-af660f95709a/manager/1.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.051762 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-44mmv_5285fcd9-54a1-4499-8930-af660f95709a/manager/0.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.386268 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-gvxjc_801f503b-36aa-441d-9487-8fedc3365d46/manager/1.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.592202 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-tt4pp_854d661c-b4ec-4046-85d3-3e643336d93c/manager/1.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.606189 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-tt4pp_854d661c-b4ec-4046-85d3-3e643336d93c/manager/0.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.819340 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-gvxjc_801f503b-36aa-441d-9487-8fedc3365d46/manager/0.log" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.836389 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:58:25 crc kubenswrapper[4982]: E0123 09:58:25.836711 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:58:25 crc kubenswrapper[4982]: I0123 09:58:25.918821 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4nz5s_ae1bdd5c-8116-47e9-9344-f1d84794541c/manager/1.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.055817 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-hfrk2_6f86042f-a06a-4756-96ae-19de64621eb4/manager/1.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.122452 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-4nz5s_ae1bdd5c-8116-47e9-9344-f1d84794541c/manager/0.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.162584 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-hfrk2_6f86042f-a06a-4756-96ae-19de64621eb4/manager/0.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.312949 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-lcpcl_cb115f06-d81c-4b67-af3f-454ffd70d358/manager/1.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.435066 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-lcpcl_cb115f06-d81c-4b67-af3f-454ffd70d358/manager/0.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.515198 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-5xrcb_a73a3899-9dfc-404d-b7cd-7eaf32de406a/manager/1.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.589606 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-5xrcb_a73a3899-9dfc-404d-b7cd-7eaf32de406a/manager/0.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.695131 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-mwfjx_9695f150-036e-4e4b-a857-7af97f1576e5/manager/1.log" Jan 23 09:58:26 crc kubenswrapper[4982]: I0123 09:58:26.877443 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-mwfjx_9695f150-036e-4e4b-a857-7af97f1576e5/manager/0.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.001164 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q4hlv_efce0465-d792-44d6-b0c6-48f3e74ccd7a/manager/1.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.001476 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-q4hlv_efce0465-d792-44d6-b0c6-48f3e74ccd7a/manager/0.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.359470 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn_3d5f4f8b-84b1-45e2-b393-254c24c090a8/manager/1.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.489254 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c9c58b557pdqzn_3d5f4f8b-84b1-45e2-b393-254c24c090a8/manager/0.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.622128 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-85bfd44c94-wq88d_b5cf4a40-effb-4d2e-8d9b-a7818151f189/operator/1.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.757016 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57c46955cf-9c26t_2502f90a-1eb4-4fca-ae97-f9624f9caa2c/manager/1.log" Jan 23 09:58:27 crc kubenswrapper[4982]: I0123 09:58:27.825510 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-85bfd44c94-wq88d_b5cf4a40-effb-4d2e-8d9b-a7818151f189/operator/0.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.193080 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-s7lls_b74d47b4-62e8-4dd7-b89f-4a8177287c3a/registry-server/0.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.281748 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-r2wx4_91102982-af50-41cf-9969-28c92bdf8f36/manager/1.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.432819 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-mwfhl_75a3ce99-813c-45c1-9b30-0bb0b47f26a6/manager/1.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.518462 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-r2wx4_91102982-af50-41cf-9969-28c92bdf8f36/manager/0.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.564551 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-mwfhl_75a3ce99-813c-45c1-9b30-0bb0b47f26a6/manager/0.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.689766 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nc6w7_96c1e2e5-87b0-4341-83cd-5b623de95215/operator/1.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.815337 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nc6w7_96c1e2e5-87b0-4341-83cd-5b623de95215/operator/0.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.911713 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-lqf89_e57a3249-8bfe-49da-88fa-dbc887422c3c/manager/1.log" Jan 23 09:58:28 crc kubenswrapper[4982]: I0123 09:58:28.996010 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-lqf89_e57a3249-8bfe-49da-88fa-dbc887422c3c/manager/0.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.138526 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-dddts_34541257-bf45-41a5-a756-393b32053416/manager/1.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.371483 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-p6jkk_564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c/manager/1.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.393092 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-dddts_34541257-bf45-41a5-a756-393b32053416/manager/0.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.418085 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-p6jkk_564c51fc-a4e2-4cfa-afd6-75eff2fc5b4c/manager/0.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.650856 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-ptg2h_16b1155b-baa1-467b-947f-36673d41a1a0/manager/1.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.664125 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-ptg2h_16b1155b-baa1-467b-947f-36673d41a1a0/manager/0.log" Jan 23 09:58:29 crc kubenswrapper[4982]: I0123 09:58:29.951492 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-57c46955cf-9c26t_2502f90a-1eb4-4fca-ae97-f9624f9caa2c/manager/0.log" Jan 23 09:58:38 crc kubenswrapper[4982]: I0123 09:58:38.829041 4982 scope.go:117] "RemoveContainer" 
containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:58:38 crc kubenswrapper[4982]: E0123 09:58:38.830784 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:58:49 crc kubenswrapper[4982]: I0123 09:58:49.548898 4982 scope.go:117] "RemoveContainer" containerID="8084e4f1756a0d69a9d8b4fb34535db44137cbf65124656a90263173e66839ae" Jan 23 09:58:49 crc kubenswrapper[4982]: I0123 09:58:49.575556 4982 scope.go:117] "RemoveContainer" containerID="4116066cfb4f2814db0fa57887692b970eb51681c7c909694d12d04a6298be12" Jan 23 09:58:49 crc kubenswrapper[4982]: I0123 09:58:49.649692 4982 scope.go:117] "RemoveContainer" containerID="c9d071a9ce7005d9c9625e2d9784c5bb399f532f4f19f7399852e9cfab67416d" Jan 23 09:58:50 crc kubenswrapper[4982]: I0123 09:58:50.104514 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xpksx_6c3f6594-0f93-41e5-89c4-96cc33522f7f/control-plane-machine-set-operator/0.log" Jan 23 09:58:50 crc kubenswrapper[4982]: I0123 09:58:50.372721 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-htgd9_e2a34d4c-ce3b-47ce-929d-c366188eca0a/kube-rbac-proxy/0.log" Jan 23 09:58:50 crc kubenswrapper[4982]: I0123 09:58:50.389415 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-htgd9_e2a34d4c-ce3b-47ce-929d-c366188eca0a/machine-api-operator/0.log" Jan 23 09:58:53 crc kubenswrapper[4982]: I0123 09:58:53.828515 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:58:53 crc kubenswrapper[4982]: E0123 09:58:53.829344 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:59:02 crc kubenswrapper[4982]: I0123 09:59:02.896929 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-2m4pf_082e5308-ee9a-4c17-a4a6-637a6e17b684/cert-manager-controller/0.log" Jan 23 09:59:02 crc kubenswrapper[4982]: I0123 09:59:02.999688 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-7wzp4_d98fe1e0-fe9c-4bc5-8de4-eed0d7181d6a/cert-manager-cainjector/0.log" Jan 23 09:59:03 crc kubenswrapper[4982]: I0123 09:59:03.105042 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-ss8hx_65f6cb92-3c6d-4e3c-b7af-734426f97f89/cert-manager-webhook/0.log" Jan 23 09:59:05 crc kubenswrapper[4982]: I0123 09:59:05.835586 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:59:05 crc kubenswrapper[4982]: E0123 09:59:05.837333 4982 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:59:15 crc kubenswrapper[4982]: I0123 09:59:15.310057 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-zxgxq_c9f0bd14-9c31-4898-ae22-e0890f02ffb7/nmstate-console-plugin/0.log" Jan 23 09:59:15 crc kubenswrapper[4982]: I0123 09:59:15.488249 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-vfwzn_3fb4fc7d-a353-4a2f-a374-b960e7405e77/nmstate-handler/0.log" Jan 23 09:59:15 crc kubenswrapper[4982]: I0123 09:59:15.535385 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-89w8z_1eeec26f-f857-4cfd-bc3a-54ff1e371b06/kube-rbac-proxy/0.log" Jan 23 09:59:15 crc kubenswrapper[4982]: I0123 09:59:15.616224 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-89w8z_1eeec26f-f857-4cfd-bc3a-54ff1e371b06/nmstate-metrics/0.log" Jan 23 09:59:15 crc kubenswrapper[4982]: I0123 09:59:15.696183 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-rxmvn_f58a63b0-6c3e-4307-b336-8dacff6ec7ee/nmstate-operator/0.log" Jan 23 09:59:15 crc kubenswrapper[4982]: I0123 09:59:15.812534 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-lp5mm_f8e1934e-1ae7-4e66-8b7d-c49c55350e43/nmstate-webhook/0.log" Jan 23 09:59:16 crc kubenswrapper[4982]: I0123 09:59:16.827450 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:59:16 crc kubenswrapper[4982]: E0123 09:59:16.828147 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:59:31 crc kubenswrapper[4982]: I0123 09:59:31.216197 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7_b55ae8cf-edc4-4236-999e-e9db7006318f/prometheus-operator-admission-webhook/0.log" Jan 23 09:59:31 crc kubenswrapper[4982]: I0123 09:59:31.218504 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-f79d544bd-xcv49_e540fff1-acc7-4d53-8691-95f7e6ac66b2/prometheus-operator-admission-webhook/0.log" Jan 23 09:59:31 crc kubenswrapper[4982]: I0123 09:59:31.219922 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q4hk4_f469aaaa-fc2d-4b94-8686-754943207b00/prometheus-operator/0.log" Jan 23 09:59:31 crc kubenswrapper[4982]: I0123 09:59:31.449751 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dqwbd_bf92fd73-e352-4046-8990-ada508e159a0/operator/0.log" Jan 23 09:59:31 crc kubenswrapper[4982]: I0123 09:59:31.451230 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8snlk_5abb74fc-c030-4b49-a655-8ef50e5a4a6a/perses-operator/0.log" Jan 23 09:59:31 crc kubenswrapper[4982]: I0123 09:59:31.827191 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:59:31 crc kubenswrapper[4982]: E0123 09:59:31.827560 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 09:59:44 crc kubenswrapper[4982]: I0123 09:59:44.827266 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.413213 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dj7gd_f717bfd9-1d07-4b04-b34e-9d19ffea876f/kube-rbac-proxy/0.log" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.699881 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-frr-files/0.log" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.726495 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dj7gd_f717bfd9-1d07-4b04-b34e-9d19ffea876f/controller/0.log" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.863273 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-frr-files/0.log" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.901988 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-metrics/0.log" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.910354 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-reloader/0.log" Jan 23 09:59:45 crc kubenswrapper[4982]: I0123 09:59:45.931542 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-reloader/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.092450 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-metrics/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.134078 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-reloader/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.134512 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"c8e3d32d348345dbedbbc56eec54c8c6d44967498165412f3892ed5ed9dfead7"} Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 
09:59:46.188732 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-frr-files/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.215019 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-metrics/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.307484 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-frr-files/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.406042 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-metrics/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.412453 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/cp-reloader/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.458339 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/controller/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.612278 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/kube-rbac-proxy/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.641562 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/frr-metrics/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.701085 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/kube-rbac-proxy-frr/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.852612 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/reloader/0.log" Jan 23 09:59:46 crc kubenswrapper[4982]: I0123 09:59:46.983357 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jj6qf_02c1e896-b983-4902-b815-b7f6c74aa735/frr-k8s-webhook-server/0.log" Jan 23 09:59:47 crc kubenswrapper[4982]: I0123 09:59:47.100111 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d4d669f8c-c6qg5_0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1/manager/1.log" Jan 23 09:59:47 crc kubenswrapper[4982]: I0123 09:59:47.230255 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6d4d669f8c-c6qg5_0a9f90e2-e566-43fb-ae76-a9e3fa83c6c1/manager/0.log" Jan 23 09:59:47 crc kubenswrapper[4982]: I0123 09:59:47.345801 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b77b75c7d-w7fhh_0fc52ef0-2aba-4ecf-89fb-5fdc9cbb98b7/webhook-server/0.log" Jan 23 09:59:47 crc kubenswrapper[4982]: I0123 09:59:47.656028 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-b98fj_40a8d301-7add-45a8-8379-cfe8c438a5ce/kube-rbac-proxy/0.log" Jan 23 09:59:48 crc kubenswrapper[4982]: I0123 09:59:48.446536 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-b98fj_40a8d301-7add-45a8-8379-cfe8c438a5ce/speaker/0.log" Jan 23 09:59:49 crc kubenswrapper[4982]: I0123 
09:59:49.601975 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kd7nd_d5740314-fbfc-46d5-a79c-7c5168604561/frr/0.log" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.170675 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n"] Jan 23 10:00:00 crc kubenswrapper[4982]: E0123 10:00:00.171709 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="extract-content" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.171725 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="extract-content" Jan 23 10:00:00 crc kubenswrapper[4982]: E0123 10:00:00.171765 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="extract-utilities" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.171773 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="extract-utilities" Jan 23 10:00:00 crc kubenswrapper[4982]: E0123 10:00:00.171815 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="registry-server" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.171823 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="registry-server" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.172116 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="46b6c886-4556-4588-96ae-3075be55578b" containerName="registry-server" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.172987 4982 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.177862 4982 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.177901 4982 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.183686 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n"] Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.193229 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e729961f-9ef2-47da-9854-324506f29380-config-volume\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.193313 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e729961f-9ef2-47da-9854-324506f29380-secret-volume\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.297113 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e729961f-9ef2-47da-9854-324506f29380-config-volume\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.297184 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnhh\" (UniqueName: \"kubernetes.io/projected/e729961f-9ef2-47da-9854-324506f29380-kube-api-access-nwnhh\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.297235 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e729961f-9ef2-47da-9854-324506f29380-secret-volume\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.298261 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e729961f-9ef2-47da-9854-324506f29380-config-volume\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.306097 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e729961f-9ef2-47da-9854-324506f29380-secret-volume\") pod 
\"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.399474 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnhh\" (UniqueName: \"kubernetes.io/projected/e729961f-9ef2-47da-9854-324506f29380-kube-api-access-nwnhh\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.416641 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnhh\" (UniqueName: \"kubernetes.io/projected/e729961f-9ef2-47da-9854-324506f29380-kube-api-access-nwnhh\") pod \"collect-profiles-29486040-tg82n\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:00 crc kubenswrapper[4982]: I0123 10:00:00.504429 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.042999 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n"] Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.262707 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/util/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.325424 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" event={"ID":"e729961f-9ef2-47da-9854-324506f29380","Type":"ContainerStarted","Data":"417a5a07ce9a710c0bc8d4d1b3d1e3c835f5d203aec758e215c6ab2fe9e609d6"} Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.598605 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/util/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.615245 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/pull/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.628499 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/pull/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.733010 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/util/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.791529 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/pull/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.831385 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931asr2dw_6f10ca80-1061-44bf-933f-068fd39b2e3d/extract/0.log" Jan 23 10:00:01 crc kubenswrapper[4982]: I0123 10:00:01.907577 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/util/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.125393 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/util/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.126207 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/pull/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.160872 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/pull/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.315015 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/util/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.347525 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/extract/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.352032 4982 generic.go:334] "Generic (PLEG): container finished" podID="e729961f-9ef2-47da-9854-324506f29380" containerID="b7c7e657a5ef0d219c6cf81499362599251f15721831ff972fd3defb6bc05a21" exitCode=0 Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.352090 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" event={"ID":"e729961f-9ef2-47da-9854-324506f29380","Type":"ContainerDied","Data":"b7c7e657a5ef0d219c6cf81499362599251f15721831ff972fd3defb6bc05a21"} Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.362006 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcs46rw_4865d9eb-206e-4e2c-9cc9-343250fefe10/pull/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.543849 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/util/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.644519 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/util/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.702634 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/pull/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.722755 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/pull/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.894135 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/extract/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.894169 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/pull/0.log" Jan 23 10:00:02 crc kubenswrapper[4982]: I0123 10:00:02.928281 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713mhstk_16004dc5-ebcc-4b6e-af9a-770063f0474e/util/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.092148 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/util/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.250429 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/pull/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.259428 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/util/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.266807 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/pull/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.448163 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/pull/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.476537 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/extract/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.479070 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08g4fbw_20080122-ebd1-4840-979b-77c6aa4d4896/util/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.673849 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/extract-utilities/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.788218 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.904440 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/extract-content/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.932606 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/extract-content/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.944013 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/extract-utilities/0.log" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.988776 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e729961f-9ef2-47da-9854-324506f29380-config-volume\") pod \"e729961f-9ef2-47da-9854-324506f29380\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.988983 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e729961f-9ef2-47da-9854-324506f29380-secret-volume\") pod \"e729961f-9ef2-47da-9854-324506f29380\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.989167 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnhh\" (UniqueName: \"kubernetes.io/projected/e729961f-9ef2-47da-9854-324506f29380-kube-api-access-nwnhh\") pod \"e729961f-9ef2-47da-9854-324506f29380\" (UID: \"e729961f-9ef2-47da-9854-324506f29380\") " Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.989606 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e729961f-9ef2-47da-9854-324506f29380-config-volume" (OuterVolumeSpecName: "config-volume") pod "e729961f-9ef2-47da-9854-324506f29380" (UID: "e729961f-9ef2-47da-9854-324506f29380"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.989849 4982 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e729961f-9ef2-47da-9854-324506f29380-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.996162 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e729961f-9ef2-47da-9854-324506f29380-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e729961f-9ef2-47da-9854-324506f29380" (UID: "e729961f-9ef2-47da-9854-324506f29380"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:00:03 crc kubenswrapper[4982]: I0123 10:00:03.996754 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e729961f-9ef2-47da-9854-324506f29380-kube-api-access-nwnhh" (OuterVolumeSpecName: "kube-api-access-nwnhh") pod "e729961f-9ef2-47da-9854-324506f29380" (UID: "e729961f-9ef2-47da-9854-324506f29380"). InnerVolumeSpecName "kube-api-access-nwnhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.091225 4982 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e729961f-9ef2-47da-9854-324506f29380-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.091268 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnhh\" (UniqueName: \"kubernetes.io/projected/e729961f-9ef2-47da-9854-324506f29380-kube-api-access-nwnhh\") on node \"crc\" DevicePath \"\"" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.157932 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/extract-utilities/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.171604 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/extract-content/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.377140 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" event={"ID":"e729961f-9ef2-47da-9854-324506f29380","Type":"ContainerDied","Data":"417a5a07ce9a710c0bc8d4d1b3d1e3c835f5d203aec758e215c6ab2fe9e609d6"} Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.377186 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417a5a07ce9a710c0bc8d4d1b3d1e3c835f5d203aec758e215c6ab2fe9e609d6" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.377192 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486040-tg82n" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.402580 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/extract-utilities/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.610090 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqmg7_e227a2fe-1594-4838-b6ed-408bb1730273/registry-server/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.640605 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/extract-content/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.659459 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/extract-utilities/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.699650 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/extract-content/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.884266 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d"] Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.886005 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/extract-utilities/0.log" Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.901016 4982 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-6fj4d"] Jan 23 10:00:04 crc kubenswrapper[4982]: I0123 10:00:04.931976 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/extract-content/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.029611 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-4n4bp_ef5466e8-4630-426e-afa1-f7d0605cfe72/registry-server/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.143777 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/extract-utilities/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.290792 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/extract-content/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.310504 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/extract-content/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.325358 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/extract-utilities/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.566467 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/extract-content/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.573107 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/extract-utilities/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.576356 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/extract-utilities/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.709883 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6fq6t_81c7e93d-c277-4f09-ba56-4603df959c0d/registry-server/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.809771 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/extract-utilities/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.839524 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6e8b15-b430-41b2-81aa-5a52a6421f70" path="/var/lib/kubelet/pods/fe6e8b15-b430-41b2-81aa-5a52a6421f70/volumes" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.849279 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/extract-content/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.858281 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/extract-content/0.log" Jan 23 10:00:05 crc kubenswrapper[4982]: I0123 10:00:05.968148 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:05.999441 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/extract-utilities/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.066367 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/extract-utilities/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.094941 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvlq_c0aeb299-0e42-4d42-bc18-b410ce52234d/registry-server/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.270418 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.275348 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/extract-utilities/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.298593 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.461919 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/extract-utilities/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.517909 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.528536 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/extract-utilities/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.638034 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ckdqx_be123c4f-514d-4704-9814-e01766eeb909/registry-server/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.706949 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/extract-utilities/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.729595 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.776088 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.909698 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/extract-content/0.log" Jan 23 10:00:06 crc kubenswrapper[4982]: I0123 10:00:06.910591 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.000108 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.001795 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dn9p9_ffbef205-368e-4072-b046-7a15121b9a88/registry-server/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.205555 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/extract-content/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.210993 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.246384 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/extract-content/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.407169 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.409559 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/extract-content/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.485120 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.523927 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fxtrd_79b862fe-c362-4b57-a793-0ef76d91ae90/registry-server/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.674781 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.683606 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/extract-content/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.719017 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/extract-content/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.866709 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/extract-content/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.926221 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/extract-utilities/0.log" Jan 23 10:00:07 crc kubenswrapper[4982]: I0123 10:00:07.926238 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/extract-utilities/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.062236 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k6zdf_7a301bdd-53d6-442f-8a9d-0ed62f7ffed0/registry-server/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.158199 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/extract-utilities/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.196475 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/extract-content/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.210223 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/extract-content/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.409867 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/extract-utilities/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.431422 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/extract-utilities/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.435447 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/extract-content/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.452863 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mjsh6_67eaccbd-4800-4599-9778-0b1fd93d1c28/registry-server/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.606457 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/extract-content/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.621337 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/extract-utilities/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.645498 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/extract-content/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.842407 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/extract-content/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.846364 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/extract-utilities/0.log" Jan 23 10:00:08 crc kubenswrapper[4982]: I0123 10:00:08.861537 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/extract-utilities/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.085754 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/extract-content/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.096485 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/extract-utilities/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.112161 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/extract-content/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.296345 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/extract-content/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.299669 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/extract-utilities/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.439502 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-v2g84_419ace1d-7907-417c-9eff-14fb51d42e31/registry-server/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.503547 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/extract-utilities/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.706199 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/extract-content/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.714014 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/extract-utilities/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.745530 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/extract-content/0.log" Jan 23 10:00:09 crc kubenswrapper[4982]: I0123 10:00:09.993246 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.017815 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/extract-utilities/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.023030 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rx29v_77c4b8c7-5274-49c3-baff-769e28df27ba/registry-server/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.119890 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-vfcbn_13e99bc1-f91c-410e-a0bb-64924929a071/registry-server/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.162204 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/extract-utilities/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.372939 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/extract-utilities/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.390927 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.423923 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.596352 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/extract-utilities/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.598954 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.609145 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/extract-utilities/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.775250 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xmwvp_3e8b4111-8592-42cd-948c-795aff30524f/registry-server/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.801543 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.819556 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/extract-utilities/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.837098 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.997488 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/extract-content/0.log" Jan 23 10:00:10 crc kubenswrapper[4982]: I0123 10:00:10.997612 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/extract-utilities/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.040126 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/extract-utilities/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.118861 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xwh47_534fe1ce-1e6b-4bc4-9091-a6b04d390502/registry-server/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.246332 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/extract-utilities/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.273837 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/extract-content/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.278553 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/extract-content/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.455717 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/extract-content/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.484986 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/extract-utilities/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.490434 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-58lhz_dd59f7ae-601a-424d-8800-d30e25869a77/marketplace-operator/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.533573 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z5vr5_a37b5a5b-1991-419f-bdf3-106f3b6579d3/registry-server/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.636603 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/extract-utilities/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.857867 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/extract-utilities/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.860933 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/extract-content/0.log" Jan 23 10:00:11 crc kubenswrapper[4982]: I0123 10:00:11.867690 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.037781 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/extract-utilities/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.043415 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.068737 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/extract-utilities/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.236608 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rpb7j_b4c7860e-1211-4fac-afcc-79f906305ebe/registry-server/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.280000 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/extract-utilities/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.295577 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.341756 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.471954 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/extract-utilities/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.486231 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.538412 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/extract-utilities/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.634513 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2vpdt_c5444979-363c-4b8d-b800-4a9e31d78f3c/registry-server/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.748964 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.758403 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/extract-content/0.log" Jan 23 10:00:12 crc kubenswrapper[4982]: I0123 10:00:12.768462 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/extract-utilities/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.016092 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/extract-utilities/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.069543 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/extract-utilities/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.073940 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/registry-server/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.091795 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vg68_948db098-f443-480e-b708-f2593a88e10a/extract-content/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.266186 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/extract-content/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.282065 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/extract-utilities/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.289034 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/extract-content/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.487107 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/extract-utilities/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.524119 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/extract-content/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.531275 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/extract-utilities/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.608186 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6ch4z_b4adcec3-f2ed-44e1-b9e0-7cf162f397a4/registry-server/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.779307 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/extract-content/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.785350 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/extract-content/0.log" Jan 23 10:00:13 crc kubenswrapper[4982]: I0123 10:00:13.793880 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.014018 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.027342 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.048646 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.142723 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78qnc_f658dfca-d638-4197-8ba8-bd73f08ee2bc/registry-server/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.263579 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.266585 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.280180 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.424947 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.432357 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.529872 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.552650 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9fcd4_bfdd7e8b-ca6f-4741-9801-1690e746df69/registry-server/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.692258 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.700972 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.714727 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.892583 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/extract-utilities/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.904105 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/extract-content/0.log" Jan 23 10:00:14 crc kubenswrapper[4982]: I0123 10:00:14.907995 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/extract-utilities/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.009170 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9g4lb_084d777f-fe2d-498d-b0a7-1cc1e17bf620/registry-server/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.142173 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/extract-utilities/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.149961 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/extract-content/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.167528 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/extract-content/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.311897 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/extract-utilities/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.338749 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/extract-content/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.420139 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/extract-utilities/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.611842 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/extract-utilities/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.617808 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tb5h_db97be89-b4ed-437e-8834-b2ef2de2abc7/registry-server/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.626079 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/extract-content/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.657165 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/extract-content/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.800635 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/extract-content/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.836583 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/extract-utilities/0.log" Jan 23 10:00:15 crc kubenswrapper[4982]: I0123 10:00:15.871228 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.030699 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f4hbf_b4318f72-acf3-465f-b420-daaa8efd5061/registry-server/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.056391 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.088196 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.092757 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.280896 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.295795 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.305789 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.380594 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fgt87_dd8cca65-3429-44d6-8da9-3aaf3dda8202/registry-server/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.501280 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.526923 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.538165 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.692101 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.719589 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.739773 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.792757 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l4q57_0c8dc713-e17d-4ffa-9b86-985b4d80c024/registry-server/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.922600 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/extract-utilities/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.944765 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/extract-content/0.log" Jan 23 10:00:16 crc kubenswrapper[4982]: I0123 10:00:16.967099 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.097239 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.125035 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/extract-utilities/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.158818 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/extract-utilities/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.229715 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-nqp77_05772d57-547c-4312-9662-be516b1eff34/registry-server/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.339536 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.360386 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.384512 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/extract-utilities/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.533846 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.559557 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/extract-utilities/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.583735 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/extract-utilities/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.826180 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.831877 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/extract-utilities/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.864086 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/extract-content/0.log" Jan 23 10:00:17 crc kubenswrapper[4982]: I0123 10:00:17.864295 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-w9mg8_78b332ef-a088-447d-a6ad-5fa2dee1b475/registry-server/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.059796 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/extract-content/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.094929 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/extract-utilities/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.140350 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/extract-utilities/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.222807 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-xfshq_84bf76b1-be1c-4809-9c81-59fea69e437f/registry-server/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.293900 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/extract-utilities/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.333400 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/extract-content/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.346312 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/extract-content/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.510778 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/extract-utilities/0.log" Jan 23 10:00:18 crc kubenswrapper[4982]: I0123 10:00:18.549463 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/extract-content/0.log" Jan 23 10:00:19 crc kubenswrapper[4982]: I0123 10:00:19.305214 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8j4j_4942c45d-21ee-4378-ae42-3b5379bc5dc5/registry-server/0.log" Jan 23 10:00:31 crc kubenswrapper[4982]: I0123 10:00:31.019035 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-q4hk4_f469aaaa-fc2d-4b94-8686-754943207b00/prometheus-operator/0.log" Jan 23 10:00:31 crc kubenswrapper[4982]: I0123 10:00:31.032884 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-f79d544bd-l8ng7_b55ae8cf-edc4-4236-999e-e9db7006318f/prometheus-operator-admission-webhook/0.log" Jan 23 10:00:31 crc kubenswrapper[4982]: I0123 10:00:31.067270 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-f79d544bd-xcv49_e540fff1-acc7-4d53-8691-95f7e6ac66b2/prometheus-operator-admission-webhook/0.log" Jan 23 10:00:31 crc kubenswrapper[4982]: I0123 10:00:31.168989 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-dqwbd_bf92fd73-e352-4046-8990-ada508e159a0/operator/0.log" Jan 23 10:00:31 crc kubenswrapper[4982]: I0123 10:00:31.203387 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-8snlk_5abb74fc-c030-4b49-a655-8ef50e5a4a6a/perses-operator/0.log" Jan 23 10:00:49 crc kubenswrapper[4982]: I0123 10:00:49.833064 4982 scope.go:117] "RemoveContainer" containerID="f996d8dd6e3353f272cb6233cc6fd25c99f68b7f7164bd38b791099048b4a806" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.161184 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486041-7g68r"] Jan 23 10:01:00 crc kubenswrapper[4982]: E0123 10:01:00.162466 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e729961f-9ef2-47da-9854-324506f29380" containerName="collect-profiles" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.162481 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="e729961f-9ef2-47da-9854-324506f29380" containerName="collect-profiles" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.162963 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="e729961f-9ef2-47da-9854-324506f29380" containerName="collect-profiles" Jan 23 10:01:00 
crc kubenswrapper[4982]: I0123 10:01:00.164247 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.217228 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486041-7g68r"] Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.284371 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-fernet-keys\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.284612 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7xn\" (UniqueName: \"kubernetes.io/projected/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-kube-api-access-tv7xn\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.284802 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-config-data\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.285191 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-combined-ca-bundle\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.386816 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-combined-ca-bundle\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.386938 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-fernet-keys\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.387007 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv7xn\" (UniqueName: \"kubernetes.io/projected/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-kube-api-access-tv7xn\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.387067 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-config-data\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc 
kubenswrapper[4982]: I0123 10:01:00.399692 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-config-data\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.403364 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-fernet-keys\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.405571 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-combined-ca-bundle\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.410113 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv7xn\" (UniqueName: \"kubernetes.io/projected/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-kube-api-access-tv7xn\") pod \"keystone-cron-29486041-7g68r\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:00 crc kubenswrapper[4982]: I0123 10:01:00.506124 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:01 crc kubenswrapper[4982]: I0123 10:01:01.031648 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486041-7g68r"] Jan 23 10:01:01 crc kubenswrapper[4982]: I0123 10:01:01.056065 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-7g68r" event={"ID":"6a828f3f-8f4f-4db8-8ee4-057363c5cd97","Type":"ContainerStarted","Data":"cc2e1b300d4816ba39551e5a00daf463605cee3f2f15c380d74b13585c0ea42e"} Jan 23 10:01:02 crc kubenswrapper[4982]: I0123 10:01:02.067819 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-7g68r" event={"ID":"6a828f3f-8f4f-4db8-8ee4-057363c5cd97","Type":"ContainerStarted","Data":"c314e6ecdd22bf0fc815c08ffa3f97b6b310348b134721cdf6b2f59ceaa8a8c5"} Jan 23 10:01:02 crc kubenswrapper[4982]: I0123 10:01:02.096216 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486041-7g68r" podStartSLOduration=2.096192408 podStartE2EDuration="2.096192408s" podCreationTimestamp="2026-01-23 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 10:01:02.086720762 +0000 UTC m=+7536.567084148" watchObservedRunningTime="2026-01-23 10:01:02.096192408 +0000 UTC m=+7536.576555804" Jan 23 10:01:04 crc kubenswrapper[4982]: I0123 10:01:04.091718 4982 generic.go:334] "Generic (PLEG): container finished" podID="6a828f3f-8f4f-4db8-8ee4-057363c5cd97" containerID="c314e6ecdd22bf0fc815c08ffa3f97b6b310348b134721cdf6b2f59ceaa8a8c5" exitCode=0 Jan 23 10:01:04 crc kubenswrapper[4982]: I0123 10:01:04.091992 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-7g68r" 
event={"ID":"6a828f3f-8f4f-4db8-8ee4-057363c5cd97","Type":"ContainerDied","Data":"c314e6ecdd22bf0fc815c08ffa3f97b6b310348b134721cdf6b2f59ceaa8a8c5"} Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.606984 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.718230 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-combined-ca-bundle\") pod \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.718315 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-fernet-keys\") pod \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.718385 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-config-data\") pod \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.718405 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv7xn\" (UniqueName: \"kubernetes.io/projected/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-kube-api-access-tv7xn\") pod \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\" (UID: \"6a828f3f-8f4f-4db8-8ee4-057363c5cd97\") " Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.724189 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6a828f3f-8f4f-4db8-8ee4-057363c5cd97" (UID: "6a828f3f-8f4f-4db8-8ee4-057363c5cd97"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.724340 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-kube-api-access-tv7xn" (OuterVolumeSpecName: "kube-api-access-tv7xn") pod "6a828f3f-8f4f-4db8-8ee4-057363c5cd97" (UID: "6a828f3f-8f4f-4db8-8ee4-057363c5cd97"). InnerVolumeSpecName "kube-api-access-tv7xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.751773 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a828f3f-8f4f-4db8-8ee4-057363c5cd97" (UID: "6a828f3f-8f4f-4db8-8ee4-057363c5cd97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.789545 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-config-data" (OuterVolumeSpecName: "config-data") pod "6a828f3f-8f4f-4db8-8ee4-057363c5cd97" (UID: "6a828f3f-8f4f-4db8-8ee4-057363c5cd97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.821409 4982 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.821437 4982 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.821445 4982 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 10:01:05 crc kubenswrapper[4982]: I0123 10:01:05.821454 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv7xn\" (UniqueName: \"kubernetes.io/projected/6a828f3f-8f4f-4db8-8ee4-057363c5cd97-kube-api-access-tv7xn\") on node \"crc\" DevicePath \"\"" Jan 23 10:01:06 crc kubenswrapper[4982]: I0123 10:01:06.113750 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486041-7g68r" event={"ID":"6a828f3f-8f4f-4db8-8ee4-057363c5cd97","Type":"ContainerDied","Data":"cc2e1b300d4816ba39551e5a00daf463605cee3f2f15c380d74b13585c0ea42e"} Jan 23 10:01:06 crc kubenswrapper[4982]: I0123 10:01:06.114067 4982 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc2e1b300d4816ba39551e5a00daf463605cee3f2f15c380d74b13585c0ea42e" Jan 23 10:01:06 crc kubenswrapper[4982]: I0123 10:01:06.113797 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486041-7g68r" Jan 23 10:02:11 crc kubenswrapper[4982]: I0123 10:02:11.435389 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:02:11 crc kubenswrapper[4982]: I0123 10:02:11.437264 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:02:29 crc kubenswrapper[4982]: I0123 10:02:29.151989 4982 generic.go:334] "Generic (PLEG): container finished" podID="aab87043-845d-4863-9f94-3b7d67fc113a" containerID="b06d87839d2262015a9072ef30b5b61f3c347d5697db733c52c53f0c214f5781" exitCode=0 Jan 23 10:02:29 crc kubenswrapper[4982]: I0123 10:02:29.152474 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d58d5/must-gather-pzp92" event={"ID":"aab87043-845d-4863-9f94-3b7d67fc113a","Type":"ContainerDied","Data":"b06d87839d2262015a9072ef30b5b61f3c347d5697db733c52c53f0c214f5781"} Jan 23 10:02:29 crc kubenswrapper[4982]: I0123 10:02:29.153707 4982 scope.go:117] "RemoveContainer" containerID="b06d87839d2262015a9072ef30b5b61f3c347d5697db733c52c53f0c214f5781" Jan 23 10:02:29 crc kubenswrapper[4982]: I0123 10:02:29.484482 4982 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-d58d5_must-gather-pzp92_aab87043-845d-4863-9f94-3b7d67fc113a/gather/0.log" Jan 23 10:02:38 crc kubenswrapper[4982]: I0123 10:02:38.971398 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d58d5/must-gather-pzp92"] Jan 23 10:02:38 crc kubenswrapper[4982]: I0123 10:02:38.972261 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-d58d5/must-gather-pzp92" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="copy" containerID="cri-o://273efca37cd1e401cd5cfbc847587adeee00be58e75c4f570ed27c7924c56558" gracePeriod=2 Jan 23 10:02:38 crc kubenswrapper[4982]: I0123 10:02:38.987257 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d58d5/must-gather-pzp92"] Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.258353 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d58d5_must-gather-pzp92_aab87043-845d-4863-9f94-3b7d67fc113a/copy/0.log" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.259270 4982 generic.go:334] "Generic (PLEG): container finished" podID="aab87043-845d-4863-9f94-3b7d67fc113a" containerID="273efca37cd1e401cd5cfbc847587adeee00be58e75c4f570ed27c7924c56558" exitCode=143 Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.440162 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d58d5_must-gather-pzp92_aab87043-845d-4863-9f94-3b7d67fc113a/copy/0.log" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.444295 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.493917 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr2cp\" (UniqueName: \"kubernetes.io/projected/aab87043-845d-4863-9f94-3b7d67fc113a-kube-api-access-hr2cp\") pod \"aab87043-845d-4863-9f94-3b7d67fc113a\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.493971 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aab87043-845d-4863-9f94-3b7d67fc113a-must-gather-output\") pod \"aab87043-845d-4863-9f94-3b7d67fc113a\" (UID: \"aab87043-845d-4863-9f94-3b7d67fc113a\") " Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.542034 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab87043-845d-4863-9f94-3b7d67fc113a-kube-api-access-hr2cp" (OuterVolumeSpecName: "kube-api-access-hr2cp") pod "aab87043-845d-4863-9f94-3b7d67fc113a" (UID: "aab87043-845d-4863-9f94-3b7d67fc113a"). InnerVolumeSpecName "kube-api-access-hr2cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.602584 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr2cp\" (UniqueName: \"kubernetes.io/projected/aab87043-845d-4863-9f94-3b7d67fc113a-kube-api-access-hr2cp\") on node \"crc\" DevicePath \"\"" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.687149 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aab87043-845d-4863-9f94-3b7d67fc113a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "aab87043-845d-4863-9f94-3b7d67fc113a" (UID: "aab87043-845d-4863-9f94-3b7d67fc113a"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.705192 4982 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aab87043-845d-4863-9f94-3b7d67fc113a-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 10:02:39 crc kubenswrapper[4982]: I0123 10:02:39.841319 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" path="/var/lib/kubelet/pods/aab87043-845d-4863-9f94-3b7d67fc113a/volumes" Jan 23 10:02:40 crc kubenswrapper[4982]: I0123 10:02:40.277662 4982 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d58d5_must-gather-pzp92_aab87043-845d-4863-9f94-3b7d67fc113a/copy/0.log" Jan 23 10:02:40 crc kubenswrapper[4982]: I0123 10:02:40.278282 4982 scope.go:117] "RemoveContainer" containerID="273efca37cd1e401cd5cfbc847587adeee00be58e75c4f570ed27c7924c56558" Jan 23 10:02:40 crc kubenswrapper[4982]: I0123 10:02:40.278390 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d58d5/must-gather-pzp92" Jan 23 10:02:40 crc kubenswrapper[4982]: I0123 10:02:40.314748 4982 scope.go:117] "RemoveContainer" containerID="b06d87839d2262015a9072ef30b5b61f3c347d5697db733c52c53f0c214f5781" Jan 23 10:02:41 crc kubenswrapper[4982]: I0123 10:02:41.435489 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:02:41 crc kubenswrapper[4982]: I0123 10:02:41.435836 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.238855 4982 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-plpxr"] Jan 23 10:02:55 crc kubenswrapper[4982]: E0123 10:02:55.239994 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="gather" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.240008 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="gather" Jan 23 10:02:55 crc kubenswrapper[4982]: E0123 10:02:55.240029 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="copy" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.240035 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="copy" Jan 23 10:02:55 crc kubenswrapper[4982]: E0123 10:02:55.240052 4982 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a828f3f-8f4f-4db8-8ee4-057363c5cd97" containerName="keystone-cron" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.240058 4982 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a828f3f-8f4f-4db8-8ee4-057363c5cd97" containerName="keystone-cron" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.240259 4982 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="copy" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.240273 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab87043-845d-4863-9f94-3b7d67fc113a" containerName="gather" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.240281 4982 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a828f3f-8f4f-4db8-8ee4-057363c5cd97" containerName="keystone-cron" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.243508 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.257032 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpxr"] Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.344394 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-catalog-content\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.344445 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-utilities\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.344483 4982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9k8f\" (UniqueName: \"kubernetes.io/projected/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-kube-api-access-p9k8f\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.446938 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-catalog-content\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.446995 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-utilities\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.447051 4982 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9k8f\" (UniqueName: \"kubernetes.io/projected/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-kube-api-access-p9k8f\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.447994 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-catalog-content\") pod 
\"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.448348 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-utilities\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.470713 4982 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9k8f\" (UniqueName: \"kubernetes.io/projected/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-kube-api-access-p9k8f\") pod \"redhat-marketplace-plpxr\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:55 crc kubenswrapper[4982]: I0123 10:02:55.577163 4982 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:02:56 crc kubenswrapper[4982]: I0123 10:02:56.161870 4982 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpxr"] Jan 23 10:02:56 crc kubenswrapper[4982]: I0123 10:02:56.523746 4982 generic.go:334] "Generic (PLEG): container finished" podID="4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" containerID="6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8" exitCode=0 Jan 23 10:02:56 crc kubenswrapper[4982]: I0123 10:02:56.523801 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpxr" event={"ID":"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c","Type":"ContainerDied","Data":"6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8"} Jan 23 10:02:56 crc kubenswrapper[4982]: I0123 10:02:56.523851 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpxr" event={"ID":"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c","Type":"ContainerStarted","Data":"9bcc389b73ce5aa181f40cbb170cd9ea396b083db830d25e1b28bf348e4c1d0a"} Jan 23 10:02:56 crc kubenswrapper[4982]: I0123 10:02:56.525782 4982 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 10:02:58 crc kubenswrapper[4982]: I0123 10:02:58.543093 4982 generic.go:334] "Generic (PLEG): container finished" podID="4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" containerID="6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290" exitCode=0 Jan 23 10:02:58 crc kubenswrapper[4982]: I0123 10:02:58.543156 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpxr" event={"ID":"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c","Type":"ContainerDied","Data":"6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290"} Jan 23 10:02:59 crc kubenswrapper[4982]: I0123 10:02:59.559110 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpxr" event={"ID":"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c","Type":"ContainerStarted","Data":"75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0"} Jan 23 10:02:59 crc kubenswrapper[4982]: I0123 10:02:59.587310 4982 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-plpxr" podStartSLOduration=2.117605021 podStartE2EDuration="4.587279199s" podCreationTimestamp="2026-01-23 10:02:55 +0000 UTC" 
firstStartedPulling="2026-01-23 10:02:56.525467278 +0000 UTC m=+7651.005830664" lastFinishedPulling="2026-01-23 10:02:58.995141456 +0000 UTC m=+7653.475504842" observedRunningTime="2026-01-23 10:02:59.577987649 +0000 UTC m=+7654.058351045" watchObservedRunningTime="2026-01-23 10:02:59.587279199 +0000 UTC m=+7654.067642575" Jan 23 10:03:05 crc kubenswrapper[4982]: I0123 10:03:05.578619 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:03:05 crc kubenswrapper[4982]: I0123 10:03:05.579142 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:03:05 crc kubenswrapper[4982]: I0123 10:03:05.630286 4982 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:03:05 crc kubenswrapper[4982]: I0123 10:03:05.681812 4982 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:03:05 crc kubenswrapper[4982]: I0123 10:03:05.876419 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpxr"] Jan 23 10:03:07 crc kubenswrapper[4982]: I0123 10:03:07.641479 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-plpxr" podUID="4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" containerName="registry-server" containerID="cri-o://75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0" gracePeriod=2 Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.187070 4982 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.291685 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9k8f\" (UniqueName: \"kubernetes.io/projected/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-kube-api-access-p9k8f\") pod \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.291828 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-utilities\") pod \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.291895 4982 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-catalog-content\") pod \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\" (UID: \"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c\") " Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.293048 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-utilities" (OuterVolumeSpecName: "utilities") pod "4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" (UID: "4cb7fb9a-4951-4a4e-8b68-69911e31ba0c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.303950 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-kube-api-access-p9k8f" (OuterVolumeSpecName: "kube-api-access-p9k8f") pod "4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" (UID: "4cb7fb9a-4951-4a4e-8b68-69911e31ba0c"). InnerVolumeSpecName "kube-api-access-p9k8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.326717 4982 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" (UID: "4cb7fb9a-4951-4a4e-8b68-69911e31ba0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.395617 4982 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9k8f\" (UniqueName: \"kubernetes.io/projected/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-kube-api-access-p9k8f\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.395931 4982 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.396365 4982 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.656975 4982 generic.go:334] "Generic (PLEG): container finished" podID="4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" containerID="75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0" exitCode=0 Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.657015 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpxr" event={"ID":"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c","Type":"ContainerDied","Data":"75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0"} Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.657032 4982 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpxr" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.657046 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpxr" event={"ID":"4cb7fb9a-4951-4a4e-8b68-69911e31ba0c","Type":"ContainerDied","Data":"9bcc389b73ce5aa181f40cbb170cd9ea396b083db830d25e1b28bf348e4c1d0a"} Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.657063 4982 scope.go:117] "RemoveContainer" containerID="75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.681226 4982 scope.go:117] "RemoveContainer" containerID="6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.690034 4982 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpxr"] Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.700440 4982 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpxr"] Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.721316 4982 scope.go:117] "RemoveContainer" containerID="6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.750924 4982 scope.go:117] "RemoveContainer" containerID="75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0" Jan 23 10:03:08 crc kubenswrapper[4982]: E0123 10:03:08.751310 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0\": container with ID starting with 75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0 not found: ID does not exist" containerID="75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.751339 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0"} err="failed to get container status \"75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0\": rpc error: code = NotFound desc = could not find container \"75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0\": container with ID starting with 75305fa6c67522d4dc9edb508cdd2f552383acf97473599c1f474bab1ea52fb0 not found: ID does not exist" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.751361 4982 scope.go:117] "RemoveContainer" containerID="6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290" Jan 23 10:03:08 crc kubenswrapper[4982]: E0123 10:03:08.751766 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290\": container with ID starting with 6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290 not found: ID does not exist" containerID="6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.751805 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290"} err="failed to get container status \"6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290\": rpc error: code = NotFound desc = could not find 
container \"6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290\": container with ID starting with 6c4bb4b9a8131e7382323127125e2f87c7c3d8d9bdb3c218a04e05be511c7290 not found: ID does not exist" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.751832 4982 scope.go:117] "RemoveContainer" containerID="6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8" Jan 23 10:03:08 crc kubenswrapper[4982]: E0123 10:03:08.752276 4982 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8\": container with ID starting with 6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8 not found: ID does not exist" containerID="6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8" Jan 23 10:03:08 crc kubenswrapper[4982]: I0123 10:03:08.752301 4982 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8"} err="failed to get container status \"6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8\": rpc error: code = NotFound desc = could not find container \"6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8\": container with ID starting with 6727755cdc2b3b5efafab29d285607cf653f0e3361a0ab383cf13fbe1d6e5eb8 not found: ID does not exist" Jan 23 10:03:09 crc kubenswrapper[4982]: I0123 10:03:09.843330 4982 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb7fb9a-4951-4a4e-8b68-69911e31ba0c" path="/var/lib/kubelet/pods/4cb7fb9a-4951-4a4e-8b68-69911e31ba0c/volumes" Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.435874 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.436257 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.436309 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.437270 4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8e3d32d348345dbedbbc56eec54c8c6d44967498165412f3892ed5ed9dfead7"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.437337 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://c8e3d32d348345dbedbbc56eec54c8c6d44967498165412f3892ed5ed9dfead7" gracePeriod=600 Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.688023 4982 
generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="c8e3d32d348345dbedbbc56eec54c8c6d44967498165412f3892ed5ed9dfead7" exitCode=0 Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.688119 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"c8e3d32d348345dbedbbc56eec54c8c6d44967498165412f3892ed5ed9dfead7"} Jan 23 10:03:11 crc kubenswrapper[4982]: I0123 10:03:11.688376 4982 scope.go:117] "RemoveContainer" containerID="3718d2cfa3f1047724df8afe9e9df7400acc374efcb02f3d444a777b906962a4" Jan 23 10:03:12 crc kubenswrapper[4982]: I0123 10:03:12.703413 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerStarted","Data":"e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684"} Jan 23 10:05:11 crc kubenswrapper[4982]: I0123 10:05:11.435721 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:05:11 crc kubenswrapper[4982]: I0123 10:05:11.436324 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:05:41 crc kubenswrapper[4982]: I0123 10:05:41.435834 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:05:41 crc kubenswrapper[4982]: I0123 10:05:41.436504 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:06:11 crc kubenswrapper[4982]: I0123 10:06:11.435491 4982 patch_prober.go:28] interesting pod/machine-config-daemon-2tlww container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 10:06:11 crc kubenswrapper[4982]: I0123 10:06:11.436010 4982 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 10:06:11 crc kubenswrapper[4982]: I0123 10:06:11.436056 4982 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" Jan 23 10:06:11 crc kubenswrapper[4982]: I0123 10:06:11.437166 
4982 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684"} pod="openshift-machine-config-operator/machine-config-daemon-2tlww" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 10:06:11 crc kubenswrapper[4982]: I0123 10:06:11.437238 4982 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerName="machine-config-daemon" containerID="cri-o://e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684" gracePeriod=600 Jan 23 10:06:11 crc kubenswrapper[4982]: E0123 10:06:11.566713 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 10:06:12 crc kubenswrapper[4982]: I0123 10:06:12.067134 4982 generic.go:334] "Generic (PLEG): container finished" podID="5f08e29d-accb-44b3-b466-5edf48e58f17" containerID="e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684" exitCode=0 Jan 23 10:06:12 crc kubenswrapper[4982]: I0123 10:06:12.067219 4982 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" event={"ID":"5f08e29d-accb-44b3-b466-5edf48e58f17","Type":"ContainerDied","Data":"e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684"} Jan 23 10:06:12 crc kubenswrapper[4982]: I0123 10:06:12.067491 4982 scope.go:117] "RemoveContainer" containerID="c8e3d32d348345dbedbbc56eec54c8c6d44967498165412f3892ed5ed9dfead7" Jan 23 10:06:12 crc kubenswrapper[4982]: I0123 10:06:12.068354 4982 scope.go:117] "RemoveContainer" containerID="e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684" Jan 23 10:06:12 crc kubenswrapper[4982]: E0123 10:06:12.068782 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 10:06:26 crc kubenswrapper[4982]: I0123 10:06:26.830058 4982 scope.go:117] "RemoveContainer" containerID="e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684" Jan 23 10:06:26 crc kubenswrapper[4982]: E0123 10:06:26.830987 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17" Jan 23 10:06:38 crc kubenswrapper[4982]: I0123 10:06:38.828077 4982 scope.go:117] "RemoveContainer" containerID="e03c1e4967c0ad6a8ddd6c513cfcb64b0bad8dd85ac7ce1bc401f28dbd860684" Jan 
23 10:06:38 crc kubenswrapper[4982]: E0123 10:06:38.828860 4982 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tlww_openshift-machine-config-operator(5f08e29d-accb-44b3-b466-5edf48e58f17)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tlww" podUID="5f08e29d-accb-44b3-b466-5edf48e58f17"